Store Mocap data Instead of Using it in Real Time #18

Open
alezuky opened this issue Nov 4, 2020 · 3 comments

alezuky commented Nov 4, 2020

Hello. I have been trying to store the mocap data (bone coordinates, for example) in a vector or a file instead of using it to animate the model in real time. To do that, instead of playing the video with the Play() command and rendering it with a camera whose target texture becomes the image used as the base texture for the BarracudaRunner script, I am using a "for" loop to step through the frames of the video, extract their textures, and render them with the same camera that updates the Main Texture, which is used as the base texture for the BarracudaRunner. It is basically the same concept the code already uses, but instead of playing the video (which analyzes roughly one frame per update cycle), I am trying to run through all the frames in a single update cycle.

My attempt succeeds in rendering the Main Texture with the same properties as the original code. However, when the BarracudaRunner code tries to use the image in the ExecuteModelAsync() coroutine, I get one of these errors for each frame of the video:

KeyNotFoundException: The given key was not present in the dictionary.
System.Collections.Generic.Dictionary`2[TKey,TValue].get_Item (TKey key) (at <437ba245d8404784b9fbab9b439ac908>:0)
Unity.Barracuda.GenericVars.PeekOutput (System.String name) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:1004)
Unity.Barracuda.GenericVarsWithReuse.PeekOutput (System.String name) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:1106)
Unity.Barracuda.GenericVars.GatherInputs (Unity.Barracuda.Layer forLayer) (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:951)
Unity.Barracuda.GenericWorker+d__29.MoveNext () (at Library/PackageCache/[email protected]/Barracuda/Runtime/Core/Backends/GenericWorker.cs:176)
UnityEngine.SetupCoroutine.InvokeMoveNext (System.Collections.IEnumerator enumerator, System.IntPtr returnValueAddress) (at <480508088aee40cab70818ff164a29d5>:0)

Does anyone have an idea of what could be causing the problem?


Below is a copy of the code I am using to run through the frames and try to extract the information.
Note: I only run this code after the model is ready to work.


using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Video;
using UnityEngine.UI;

public class ExtractsPartitions : MonoBehaviour
{

public VideoPlayer videoPlayer;
public VideoCapture videoCapture;
public VNectBarracudaRunner barracudaRunner;
public VNectModel referenceAvatar;

private RenderTexture videoTexture;


public bool extractFrames = true;

public bool framesToExtract = true;
public bool framesExtracting = false;
public bool framesExtracted = false;

public long startingFrame = 0;
public long finalFrame = 0;


Camera camera;


void Update()
{
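    // One-shot driver: prepare the video on the first pass, then extract every frame in a single update once the player is prepared.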
    if (extractFrames)
    {
        if (framesToExtract && !videoPlayer.isPrepared)
        {
            InitMainTexture();
            PrepareFrames();

            framesToExtract = false;
        }

        if (videoPlayer.isPrepared)
        {                
            framesExtracting = true;

            ExtractPartitions();

            framesExtracting = false;
            framesExtracted = true;
        }

        if (framesExtracted)
        {   
            extractFrames = false;
        }
    }        
}



private void PrepareFrames()
{
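    // Switch the VideoPlayer to API-only rendering and start preparing it so frames can be seeked manually later.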

    videoPlayer.renderMode = VideoRenderMode.APIOnly;
    videoPlayer.Prepare();
    videoPlayer.sendFrameReadyEvents = true;        
}    



private void ExtractPartitions()
{       
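    // Clamp the requested frame range, then seek each frame, render it into the Main Texture, and run the pose estimation on it.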

    videoPlayer.Pause();

    long framecount = (long)videoPlayer.frameCount;

    if (startingFrame < 0 || startingFrame > framecount || startingFrame > finalFrame)
    {
        startingFrame = 0;
    }

    if (finalFrame <= 0 || finalFrame > framecount || finalFrame < startingFrame)
    {
        finalFrame = framecount;
    }

    long numberOfFrames = finalFrame - startingFrame;

    for (long realFrame = 0; realFrame < numberOfFrames; realFrame++)
    {
        videoPlayer.frame = realFrame + startingFrame;
        
        videoTexture = (RenderTexture)videoPlayer.texture;     

        videoCapture.GetComponent<Renderer>().material.mainTexture = videoTexture;

        camera.Render();

        barracudaRunner.UpdateVNectModel();

        referenceAvatar.PoseUpdate();      
		
    }

}

void InitMainTexture()
{
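    // Recreate the video texture, screen/background setup, and the MainTextureCamera so that camera.Render() writes into videoCapture.MainTexture.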

    videoTexture = new RenderTexture((int)videoPlayer.clip.width, (int)videoPlayer.clip.height, 24);

    videoPlayer.targetTexture = videoTexture;

    var sd = videoCapture.VideoScreen.GetComponent<RectTransform>();
    sd.sizeDelta = new Vector2(videoCapture.videoScreenWidth, (int)(videoCapture.videoScreenWidth * videoPlayer.clip.height / videoPlayer.clip.width));
    videoCapture.VideoScreen.texture = videoTexture;

    var aspect = (float)videoTexture.width / videoTexture.height;

    videoCapture.VideoBackground.transform.localScale = new Vector3(aspect, 1, 1) * videoCapture.VideoBackgroundScale;
    videoCapture.VideoBackground.GetComponent<Renderer>().material.mainTexture = videoTexture;


    GameObject go = new GameObject("MainTextureCamera", typeof(Camera));

    go.transform.parent = videoCapture.transform;
    go.transform.localScale = new Vector3(-1.0f, -1.0f, 1.0f);
    go.transform.localPosition = new Vector3(0.0f, 0.0f, -2.0f);
    go.transform.localEulerAngles = Vector3.zero;
    go.layer = videoCapture._layer;

    camera = go.GetComponent<Camera>();
    camera.orthographic = true;
    camera.orthographicSize = 0.5f;
    camera.depth = -5;
    camera.depthTextureMode = 0;
    camera.clearFlags = CameraClearFlags.Color;
    camera.backgroundColor = Color.black;
    camera.cullingMask = videoCapture._layer;
    camera.useOcclusionCulling = false;
    camera.nearClipPlane = 1.0f;
    camera.farClipPlane = 5.0f;
    camera.allowMSAA = false;
    camera.allowHDR = false;
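
    // The Main Texture created below is the image that BarracudaRunner uses as its base texture.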

    videoCapture.MainTexture = new RenderTexture(videoCapture.bgWidth, videoCapture.bgHeight, 0, RenderTextureFormat.RGB565, RenderTextureReadWrite.sRGB)
    {

        useMipMap = false,
        autoGenerateMips = false,
        wrapMode = TextureWrapMode.Clamp,
        filterMode = FilterMode.Point,

        graphicsFormat = UnityEngine.Experimental.Rendering.GraphicsFormat.B5G6R5_UNormPack16,

    };


    camera.targetTexture = videoCapture.MainTexture;
}

}
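
For reference, this is the kind of export step I am aiming for once the pose data is available for each extracted frame. It is only a sketch: how the joint positions are actually read out of VNectModel (the Vector3[] passed to RecordFrame below) is hypothetical and would have to be adapted to the real fields of the model.

using System.IO;
using System.Text;
using UnityEngine;

// Sketch only: buffers one CSV line of joint positions per processed frame
// and writes everything to disk once the extraction loop has finished.
public class MocapRecorder
{
    private readonly StringBuilder csv = new StringBuilder();

    // Call once per extracted frame, after barracudaRunner.UpdateVNectModel()
    // and referenceAvatar.PoseUpdate() have run.
    public void RecordFrame(long frameIndex, Vector3[] jointPositions)
    {
        csv.Append(frameIndex);
        foreach (var p in jointPositions)
        {
            csv.AppendFormat(",{0},{1},{2}", p.x, p.y, p.z);
        }
        csv.AppendLine();
    }

    // Call once after the loop in ExtractPartitions() to persist the data.
    public void Save(string path)
    {
        File.WriteAllText(path, csv.ToString());
    }
}

Inside the "for" loop in ExtractPartitions() I would call recorder.RecordFrame(realFrame + startingFrame, jointPositions) right after referenceAvatar.PoseUpdate(), and recorder.Save(...) once after the loop, where jointPositions stands for however the bone coordinates end up being exposed.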

yukihiko (Contributor) commented

Sorry, I don't have time to read your source code. But I have done the same thing before, so do your best to make it work.

yavuzselimyayla commented

Any update on this? @alezuky

alezuky (Author) commented Dec 16, 2020

Not yet...
