There are scores of teleprompter apps for Android, but it's hard to get a beamsplitter in a usable size like 300x400mm without shelling out more than 100 Euro.
("Free Teleprompter" doesn't change the font size on Ice Cream Sandwich and is thus unusable. Sorry.)
What I did was use my Galaxy Nexus in its Otterbox outdoor case.
The case has an extremely sturdy belt clip that can even rotate the phone.
So I cut a short strip of wood that is exactly as wide as the opened belt clip, so the clip can securely hook onto it.
This way the belt clip grips the wood all around and cannot move.
Then I drilled a 1/4 inch hole and positioned this piece of wood using a magic arm (a Chinese clone) attached to the hot shoe of the camera.
Result: It works very very well and takes up virtually no space at all.
With the PanLeica 25mm lens (slightly less than 50mm full-frame equivalent) the eye movement and direction of viewing work well when the shot covers head and upper body.
With a closeup covering only the head you notice that I am actually looking towards a point above the lens.
You'd need a longer lens (like 50mm or 85mm) and a camera positioned further away to make such close-ups work.
...or an actual beamsplitter mirror to have the text in front of the lens.
CCC meeting was great.
Had to pick up the workshop tools they had borrowed from me to build a lockable wall for locking away expensive equipment.
The Hackerspace is halfway done and there are enough people interested in getting our radio show going again.
I do have all the technology but work demands that I leave Freiburg for about two years.
So I can't help much except in doing interviews on CCC events.
All that talking about the podcasts makes me want to do another show on my sleeping YouTube channel myself....
I'm unsure about standing in front of a camera again because of my imperfect English...
He'll have two mentors including me and I hope this works out well.
Haven't done this before.
Having done Java for half of my life and Android for the last 2+ years, it sounded like a sensible thing to look at...
The thing I'm currently looking at is localisation.
I'm quite underwhelmed.
It really looks like the current way to design UIs is storyboard files.
They are a newly introduced mechanism, so one would expect Apple to have learned from any past mistakes.
They are graphically edited XML files that contain the UI elements, their connection to the Code and all strings.
...They don't support any localisation.
From what I figured out, the way to translate them is basically to let Xcode (the IDE) create a full copy of that file, in which you then replace all texts.
This means that any added, moved or otherwise modified UI element needs to be redone again and again for every language your project features.
Just plain dumb!
No one but an American could have come up with such nonsense in localisation.
It now has a menu to allow filtering the lists on devices that no longer have a menu button.
(Formerly you long-pressed the menu button to filter lists that implement the Filterable interface.)
I also added support for GoogleTV. It should appear in the market on the TV any time now.
(No sideload nonsense.)
The plug on the first one is a bit shaky because I had to resolder it multiple times before I found a hidden second ground cable running from the battery contacts to the flash head.
That was why it would charge and do everything else but not flash when switching from the battery contacts to external power: doing so accidentally left the flash head itself floating without ground.
I'm using a 6V 2.25A wall-wart-style universal power supply with overload protection.
Others have used 1.4A and 2.8A, and it seems you can go well above 2.8A to get even faster recycle times.
I am not sure I want to go there, as I find this more than fast enough and don't want to overheat my flash head.
I changed that approach a bit by using a stapler.
I took off the filters I wanted to use most often, together with their description sheets.
Then I turned each filter around (because the punch holes are not perfectly centered) and stapled its description to the filter so it stays visible.
This way it's easy to identify a specific filter even though many of the colors look alike when you don't have a direct comparison.
A 2.2A 6V wall wart seems to work just fine.
I guess I'll get another such wall wart, some power sockets and switches to switch between external power and battery on Monday, then modify both my flashes.
There also seems to be enough space in there to integrate my 4 channel RF remote triggers directly into the body of these.
...then it occurred to me that I do have a workshop downstairs.
So I started building some 2cm and 5cm grid lights for my 2 YN460-II slave flashes.
(I still need to do the same for my Metz 58 AF-2, since it's bigger and the grids don't fit both kinds of flashes.)
|unmodified YN460-II at 1/128|
|with 2cm grid added|
|with 5cm grid added|
I volunteered to be a mentor for a possible student working on Android apps for OpenStreetMap.
There seems to be a proposal to create something more geared towards inexperienced mappers, based on the Vespucci code, which I took over from the retired maintainer years ago.
The Google Open Source Programs Team
I guess the next release will feature these changes.
This posting is constantly being updated and rewritten with details as they come up.
- So far no luck with the OpenBench Logic Sniffer.
- Can't figure out if the buffered inputs support 3.3V signals or only 5V signals. We are trying to verify this by connecting and disconnecting the on-board 3.3V supply to an input-pin to get a known signal.
- But we are out of chocolate and coffee!!!
- I fetched my pocket-oscilloscope from home but forgot 2 cables.
- Contacted "Nussgipfel" for an oscilloscope, because he holds an oscilloscope workshop tonight, so he must have a working device. Hope to verify that the signal is 3.3V and measure its frequency/bitrate to get the OpenBench to work on decoding it.
- Hacked my pocket oscilloscope. Found a 5V signal (-0.5 to 4.5V and -1 to 4V) in the 600-800Hz range.
- Decoded the signal
- Batteries died while decoding the LED signals. Buttons are decoded.
- Trying to find Nussgipfel again to make screenshots of the undistorted waveforms and document my findings below.
- Found a second signal being transmitted with >100ms delay after the first signal. Need help analysing it.
- Checked the signal using a larger scope. Seems I had GND and signal swapped on the small one, so low and high were inverted. May be RS232 after all? With start+stop bits the signal checks out: 2400 bps = 417 µs per bit, which matches our ~0.4 ms per bit.
- Taking a break to eat some fondue down in the huuuuuuge bunker below this building. Planning to use a larger logic analyser later.
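The bit-timing sanity check from the log can be redone in a few lines; this assumes a standard 8N1 UART frame (1 start bit, 8 data bits, 1 stop bit):

```python
# Does 2400 bps match the ~0.4 ms per bit we measured on the scope?
BAUD = 2400
bit_time_us = 1_000_000 / BAUD      # duration of a single bit in microseconds
frame_bits = 1 + 8 + 1              # start + 8 data + stop (8N1)
frame_time_ms = frame_bits * bit_time_us / 1000

print(round(bit_time_us))           # 417 -> µs per bit, matching the measured ~0.4 ms
print(round(frame_time_ms, 2))      # 4.17 -> ms per byte on the wire
```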
The 4 connections between RC04 remote and H4n are labeled 3.3V, RX, TX and GND.
The single chip on the remote is labeled "D78f0500A"
It could be an NEC microcontroller µPD78f0500.
The number of pins and the pins connected to RX, TX and SCK seem to match.
The datasheet is in Korean, but what I can make out is that this should be a 5MHz microcontroller that can run on 3.3V and 5V. No details from the datasheet cast any light on the strange encoding used (described below).
What I found out about the protocol transmitted by the RC04 remote on the two lines "RX" and "TX", when certain buttons are pressed on the remote or the Zoom H4n lights up certain LEDs, is as follows:
The protocol is RS232 at 3.3V, 2400bps, 8N1.
The remote sends 2 sequences of 2 bytes with a small delay:
Record: 0x81 0x00 | 0x80 0x00
Play: 0x82 0x00 | 0x80 0x00
Stop: 0x84 0x00 | 0x80 0x00
ffwd: 0x88 0x00 | 0x80 0x00
rwd: 0x90 0x00 | 0x80 0x00
vol+: 0x80 0x08 | 0x80 0x00
vol-: 0x80 0x10 | 0x80 0x00
rec+: 0x80 0x20 | 0x80 0x00
rec-: 0x80 0x40 | 0x80 0x00
mic : 0x80 0x01 | 0x80 0x00
ch1 : 0x80 0x02 | 0x80 0x00
ch2 : 0x80 0x04 | 0x80 0x00
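From the table above, a small sketch of the bytes a microcontroller (or a PC with a 3.3V serial adapter) would have to send to emulate a button press. The `button_frames` helper is my own name, and reading the trailing `0x80 0x00` as a "button released" frame is my interpretation, not confirmed by Zoom:

```python
# Each button press is two 2-byte frames: the key code,
# then 0x80 0x00 (presumably "no button pressed anymore").
BUTTONS = {
    "record": (0x81, 0x00), "play": (0x82, 0x00), "stop": (0x84, 0x00),
    "ffwd":   (0x88, 0x00), "rwd":  (0x90, 0x00),
    "vol+":   (0x80, 0x08), "vol-": (0x80, 0x10),
    "rec+":   (0x80, 0x20), "rec-": (0x80, 0x40),
    "mic":    (0x80, 0x01), "ch1":  (0x80, 0x02), "ch2":  (0x80, 0x04),
}

def button_frames(name: str) -> bytes:
    """Return the 4 bytes the RC04 sends over TX for one button press."""
    return bytes(BUTTONS[name]) + bytes((0x80, 0x00))

print(button_frames("record").hex(" "))  # 81 00 80 00
```

On real hardware these bytes would go out at 2400 bps, 8N1, with the small delay between the two frames that was observed on the scope.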
It receives a single byte that is a bitmask of the LEDs to light up:
? & 0x01 = record LED
? & 0x10 = MIC LED
? & 0x60 = CH1+CH2 green (0x20 + 0x40)
? & 0x20 = CH1 green
? & 0x40 = CH2 green
? & 0x04 = CH1 red (0x16?)
? & 0x08 = CH2 red
? & 0x24 = CH1 yellow (red+green)
? & 0x48 = CH2 yellow (red+green)
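The LED bitmask above translates directly into a decoder. A sketch (the function and the LED names are mine, chosen to match the list above):

```python
# One status byte from the H4n is a bitmask of LEDs to light up.
LED_BITS = {
    0x01: "record",    0x10: "mic",
    0x20: "ch1 green", 0x40: "ch2 green",
    0x04: "ch1 red",   0x08: "ch2 red",
}

def decode_leds(status: int) -> list:
    """Return the LEDs lit for one status byte.
    Red+green set on the same channel shows as yellow, as observed above."""
    return [name for bit, name in LED_BITS.items() if status & bit]

print(decode_leds(0x24))  # ['ch1 green', 'ch1 red'] -> CH1 shows yellow
```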
- DS0201 "DSO Nano" pocket oscilloscope (manual)
- Photos (sorry, rejected photos were uploaded too for some reason)
- Photos of the RC04 for the H4n
- All Videos (please subscribe)
- Andreas made a radio remote for the Zoom H4n based on these findings.
Next step: implement this on an ATtiny13 using a software UART. Maybe use a CMOS 4019/4052 or MAX4619 to trigger something else too.
What I plan to do is play with one at EasterHegg in Basel this weekend and reverse engineer the serial protocol used by the RC04 remote control of the Zoom H4n audio recorder.
With that information I should be able to program a microcontroller to start not only 1-3 cameras but also the sound recording in perfect sync, all with one button on the right handle of my camera rig.
(2 cameras for 3D, 3 cameras for interviews, usually only 1 camera+1x4 channels of sound)
During this time the Chaos Computer Club Freiburg (of which I am a founding member) managed to create its own Hackerspace.
Now for the first time I get to see it...
It is located in a former subterranean walkway that has been extended and fitted with ventilation, power, internet, a security system,...
I'm now downloading the Samsung SmartTV SDK 3.1.1.
Let's see how difficult it is to develop apps for my own TV set and get them into Samsung's market.
The nearly 300MB are still downloading... done.
Well... it's a 300MB ZIP file containing... an .exe file.
Nowhere did it state that Windows was required.
On the contrary: "Specs & Features" says "Linux 2.6".
Quote of the day:
"If you want to make sync between TV and SDK, check and install apache server"
Not only does it install a full-blown Apache server, it also does not let you choose to install only SmartTV 2011 without 2010 and 2012.
It clutters the desktop with 4 new icons: one for each emulator and one for an "Apps Editor" that... requires administrator privileges.
The dialog says: "Manufacturer: Unknown" but I have to give it admin permissions.
Next dialog: Windows firewall complains about it.
The screen resembles XCode pretty much.
No example projects to open. ...this could make it hard to get a grip on how things should work.
In the next step it asks me for a number of settings without any help as to their meaning, using unhelpful names like "cpname", "cplogo", "mgrver" or "dcont"="y".
There is a UI designer and it does reflect simple changes of the code in the UI.
I can create buttons and other UI elements. What I can't do is double-click or right-click a button to get to the code that handles the click.
I can run and debug it in an emulator.
No setting to run this on my Samsung TV.
Also triggers a Windows Firewall warning....twice.
I wonder why I did not get a choice of which emulator to run. It just opens the 2012 one even if I want to test on a 2011 TV set.
There is a menu entry "Run Active Project...". Apart from the wrong capitalisation, the ellipsis ("...") should indicate that a dialog will open. ...doesn't happen. Just the 2012 emulator again.
The documentation tells me how to package my app. Apparently separate packages need to be created for Europe, America, Asia, Africa and "Others".
It does not tell me how to install and run that package on my TV.
However I did find a blog posting about it: here
Problem is: that seems to be for 2010 TV sets. My 2011 TV has a completely different menu and no "server" or "development" setting in there.
You need to lose all stored passwords on your TV (not again... 50+ character passwords are a PAIN to type on a TV remote). Instructions are here.
Looks like you have to create a new account with the special name "develop" that no one bothered to tell you about. Then a special menu will suddenly appear.
In that menu you can set your development PC as the server. You have to enter an IPv4 address; there is no way to enter one if your network uses IPv6.
- Download the SDK
- Running your app on an actual TV set
- Running on a 2011 TV set
- Stackoverflow postings
It works really well to navigate through the local file system and multiple Dropbox accounts using the large GoogleTV remote.
The password is only entered once per account and the large screen can support the existing tablet-UI showing local and remote side at the same time.
What I found missing was a program to do basic editing of 3D photos.
- white balance
- saturation, levels, brightness, contrast,...
- crop, scale and rotate to make some horizontal/vertical line in the photo truly horizontal/vertical
- save in the original MPO format as well as side-by-side and red/cyan anaglyph.
After someone commented that Photoshop CS6 supports MPO files (containers with 2 JPEG images, used for 3D photos), I checked it out.
Bridge CS6 doesn't offer "Photoshop" or "Camera Raw" under "Open with...".
Bummer. I can only open them with the Preview app of MacOS from within Bridge.
Then I opened the MPO files in the Photoshop CS6 trial.
They open in a "3D-layer" where I can switch between Anaglyph and SideBySide.
However, apart from HDR toning, I can't do anything.
Nothing involving colors at all.
I am offered to crop and rotate the image, but for rotating a) I need to enter the rotation in degrees instead of marking a horizontal/vertical line and b) it asks me to convert the 3D layer into a SmartObject, and I can't convert back to 3D.
Trimming a 3D image displayed as side-by-side trims the SBS-image, not the left+the right image, so it destroys the image.
So yes, it can open them, but I can't DO anything with them once opened.
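Since an MPO container is essentially complete JPEGs stored back to back, one crude workaround without any 3D-aware software is to split the file at the JPEG start-of-image markers and edit the halves separately. A sketch (the `split_mpo` name is mine; this naive scan assumes each image starts with an APP1/EXIF segment and can be fooled by embedded thumbnails, so the robust approach would be to parse the MP index IFD for the exact offsets):

```python
def split_mpo(data: bytes) -> list:
    """Naively split MPO bytes into its JPEG images.
    We cut at every SOI marker (ff d8) followed by an APP1 segment (ff e1),
    which is how the main images in an MPO typically begin."""
    marker = b"\xff\xd8\xff\xe1"
    offsets = []
    pos = 0
    while (pos := data.find(marker, pos)) != -1:
        offsets.append(pos)
        pos += 1
    offsets.append(len(data))
    # Slice between consecutive marker positions (last slice runs to EOF).
    return [data[a:b] for a, b in zip(offsets, offsets[1:])]
```

The resulting pieces are plain JPEGs that any normal editor can handle; simply concatenating the edited halves back together would leave the MPF offsets in the APP2 segment stale, so a proper re-save would need to rewrite those.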