Find out which allocators the DirectShow filters are using?


In my DVR media player project, I wrote two filters: a push source filter to read data from the DVR, and a decoder (transform) filter to decode the video. Most allocator work is taken care of by the DirectShow base classes; I dug deeper into the allocators when I was implementing renderless mode.
The question is: how many allocators are in the graph, and where do they come from?
There are two: one from the decoder’s input pin, and one from the VMR’s input pin.
Here are my findings by stepping into the base class code.
First allocator (used by source filter):

CBaseOutputPin::DecideAllocator(IMemInputPin *pPin, __deref_out IMemAllocator **ppAlloc)
//*ppAlloc is the allocator the source filter's output pin will be using.
//*ppAlloc is from the decoder's input pin.
//It's created there (by CreateMemoryAllocator, instead of re-used) because decoding isn't in-place.
//Instead of using the allocator from the next (downstream) input pin, this pin can override InitAllocator to use its own.
//*ppAlloc is approved by DecideBufferSize().

Second allocator (used by decoder filter):

CBaseOutputPin::DecideAllocator(IMemInputPin *pPin, __deref_out IMemAllocator **ppAlloc)
//*ppAlloc is the allocator the decoder filter's output pin will be using.
//*ppAlloc is from the VMR's input pin.
//*ppAlloc is approved by DecideBufferSize().

Summary: for a non-in-place transform filter, in most circumstances, its input pin provides the allocator for the upstream output pin, and its output pin uses the allocator from the downstream filter’s input pin.

Find out the length of an array


void TestLengthArray()
{
	//_msize is in <malloc.h>, _ASSERT in <crtdbg.h> (MSVC CRT only)
	enum {E_ARRAY_LEN = 2};
	int *a = new int[E_ARRAY_LEN];
	size_t iSize = _msize(a) / sizeof a[0];
	_ASSERT(iSize == E_ARRAY_LEN); //here we got the length of the array
	delete [] a; //don't leak the test array
}

MSDN doesn’t say whether _msize can be used on memory that came from ‘new’; it just seems to work.
This is not supported by the standard, but it’s good to know it’s possible at runtime.

Why EXACTLY does the compiler need help from dllimport?


extern "C" __declspec(dllimport) void __stdcall func(); //func() is in a DLL file and called by an executable

Many articles say that dllimport produces more efficient code by bypassing a jump.
But why is there a jump in the first place?
Short answer: because at the call site there is not enough room for the linker to fill in a function pointer.

Long answer:
the compiler doesn’t know whether func() comes from a DLL or not, and it assumes it doesn’t (to save space and time). The compiler leaves a slot at the spot where func() is called, for the linker to fill in with func()’s address. If it’s a non-DLL function, no problem. If it IS a DLL function, the linker doesn’t know the function’s address; it only knows the pointer that will hold this address (the IAT entry). The linker’s solution: a stub.
The linker fills the slot with the address of the stub, and the stub’s code is a JMP to wherever the function pointer (the IAT entry) points.
This explains why the stub (jump) is needed. Of course, if the compiler knows it’s a DLL function, it doesn’t need the stub: the compiler can simply leave a slot big enough for the linker to fill with the function pointer (plus ‘DWORD PTR’, but let’s ignore that to keep it simple).

Can “multiple interface inheritance” replace “virtual inheritance”?


Obviously this is C#’s approach: replacing “virtual inheritance” with “multiple interface inheritance”.
Are there any drawbacks other than that no data members are allowed in interfaces?
Yes and no. There are cons and pros to “multiple interface inheritance”, and they roughly cancel out.

struct CMultiBase
{
	virtual void f() = 0;
};
struct CMultiBase1: CMultiBase
{
	virtual void f1() = 0;
};

struct CMultiBase2: CMultiBase
{
	virtual void f2() = 0;
};

struct CMultiSub: CMultiBase1, CMultiBase2
{
	void f()  { }
	void f1() { }
	void f2() { }
};
void TestMulti()
{
	CMultiSub s;
	CMultiBase1 *p1 = &s;
	CMultiBase2 *p2 = &s;
	//uintptr_t (from <cstdint>) keeps this pointer arithmetic correct on both 32- and 64-bit builds
	uintptr_t * pRawDword1 = reinterpret_cast<uintptr_t*>(p1);
	uintptr_t * pRawDword2 = reinterpret_cast<uintptr_t*>(p2);
	cout << "first entry in CMultiBase1's vtable is " << hex << *reinterpret_cast<uintptr_t*>(*pRawDword1) << endl; 
		//This is CMultiSub::f as seen from watch window (expand p1 then CMultiBase1) 
	cout << "first entry in CMultiBase2's vtable is " << hex << *reinterpret_cast<uintptr_t*>(*pRawDword2) << endl; 
		//This is [thunk]:CMultiSub::f as seen from watch window
}

From the output above, we can see:
1. CMultiSub’s vtables have two entries for f(), one per base. There is only one f() entry in the case of ‘virtual inheritance’.
2. There are only two vtable pointers in CMultiSub (located at pRawDword1 and pRawDword2), while there are three vtable pointers in the case of ‘virtual inheritance’: one extra for the virtual base.
Summary: “multiple interface inheritance” saves space by dropping one vfptr, and costs space through the duplicated entries in the vtables.
So the final answer is yes. No wonder both C# and Java took this approach.