When? When, please, audio tracks?

Comments

  • @hisdudeness said:
    now that the iPhone version is out, any update on this upcoming IAP?

    I don’t think there will be a public timeline from Matt anymore; these updates will land when they are ready. It’s frustrating to wait, I know – it’s frustrating for me too. The beta team could see that the iPhone release was very near, but we couldn’t tell anyone, even when we could see how desperately people wanted it released! But at least when it is released, it will be polished and ready. And the last two builds of this update really took the UI to another level.

    Matt spends a lot of time developing and streamlining the UI. I think it’s time well spent, as NanoStudio is known for its UI, but it is a lot of time all the same.

    So, the only thing we can safely say is that audio tracks are the next big update in line at the moment. Before that, there might be maintenance updates.

  • Sorry, late to this thread, but I wanted to add a reason I like audio tracks over the current workarounds - you can see the waveform.

    I have built an Obsidian patch that allows me to play a long sample (for example, a whole mixed-down track) using one long note. So, with some additional effort, I can include a long sample from an external instrument (can't sing to save myself, so no vocals yet :grimace: ). But the best I can do to visualize that sample on the timeline is to colour and name the clip, and that makes it harder to manage if there are several of these audio sections.

    Of course I cannot edit the audio easily within the timeline, apart from volume edits to the sample.

    There are advantages to using Obsidian as the sample-playing engine, by the way. I managed to make a fun "tape degradation" effect by changing the pitch of the sample with an LFO and adding some noise (a rough sketch of the idea is at the end of this post)... I think I also automated the global filter a little to simulate variable tape quality, but I can't remember.

    If anyone's keen, I could set up a Patchstorage account and leave some of these there. They are definitely workarounds, but if anyone would use or adapt them, why not? Just let me know.

    I personally don't think I would use stretching and pitch correction, but I'm not sampling other works and hopefully not changing the BPM after recording.
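
    To make the "tape degradation" idea a bit more concrete, here is a rough numpy sketch of the same trick outside NS2 - a slow pitch wobble plus a little hiss. The rates and depths are illustrative guesses, not the values from the patch above.

    ```python
    # Rough sketch of the "tape degradation" idea: wobble the playback
    # speed with a slow LFO and add a little hiss. All values are guesses.
    import numpy as np

    def tape_degrade(x, sr, wobble_hz=0.7, wobble_depth=0.003, hiss=0.002):
        n = len(x)
        t = np.arange(n) / sr
        # Slow wow/flutter LFO: playback speed drifts around 1.0
        speed = 1.0 + wobble_depth * np.sin(2 * np.pi * wobble_hz * t)
        # Integrate the speed to get a warped read position, then resample
        read_pos = np.cumsum(speed)
        read_pos = read_pos / read_pos[-1] * (n - 1)
        warped = np.interp(read_pos, np.arange(n), x)
        # Add a touch of tape hiss
        return warped + hiss * np.random.randn(n)
    ```

    Feed it a mono float array, e.g. `y = tape_degrade(x, 44100)`. In Obsidian the equivalent is just an LFO routed to oscillator pitch plus some added noise.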

  • @Trigger_the_Monkey said:
    They are definitely workarounds, but if anyone would use or adapt them, why not? Just let me know.

    I personally don't think I would use stretching and pitch correction, but I'm not sampling other works and hopefully not changing the BPM after recording.

    Thank you for the interesting workaround, and for the cool "tape" effect trick :)

    I found a workaround too - I bought the MultiTrack AUv3 plug-in, and it helps me a lot. It's sad that I can't record guitars or vocals directly in NS2, but this is a working solution anyway.

    I don't care about "stretching" or editing with a huge amount of tools. Most of the complex tasks can be done in separate editing packages. I need to be able to freeze synths (because that is the life-saving trick: if you like a synth part, bounce it to WAV), to record from internal NS2 tracks or NS2 buses, and to split, trim, and fade start/end. That is all.

    I hope the audio tracks update is close. It will be very interesting to see what we get, because I can see how precisely and carefully NS2 is made.

  • @romanch said:

    I found a workaround too - I bought the MultiTrack AUv3 plug-in, and it helps me a lot. It's sad that I can't record guitars or vocals directly in NS2, but this is a working solution anyway.

    There are two ways to get around this:

    1. Record your guitar or vocals into a Slate pad or Obsidian sample OSC and then move the file into the AUv3’s audio pool.

    2. You can also record audio (vocals or guitar) in the standalone version of the AUv3 app and the recordings show up immediately in the audio pool of any instance inside NS. If you export a backing track from NS, you can work inside the multitracker until you are satisfied with the vocal / guitar and then move back into NS.

  • @Stiksi thank you very much. It sounds cool and I need to play with it. I just bought my new iPad Pro and I’m not sure how to drag and drop or move files between apps. For example, I can’t drag and drop a file from the Files app to Cubasis 2 tracks, but I can do it between Files and the MultiTrack audio pool.

    Still learning new features. Thanks for the workaround!

  • @Stiksi I found out how to do that in another thread on this forum. The Files app can access NS2 data, and that is the way: I can drag and drop files between the pool and the NS2 folders.

  • @romanch said:
    @Stiksi I found out how to do that in another thread on this forum. The Files app can access NS2 data, and that is the way: I can drag and drop files between the pool and the NS2 folders.

    Yes, that was the piece I forgot, glad you found it! 😀

  • @Stiksi said:

    @romanch said:
    @Stiksi I found out how to do that in another thread on this forum. The Files app can access NS2 data, and that is the way: I can drag and drop files between the pool and the NS2 folders.

    Yes, that was the piece I forgot, glad you found it! 😀

    By the way, I have a question: what are the pros and cons of using Obsidian as a “wav” player? I found that it can record from Audiobus, which is cool. I successfully recorded audio from the Thor synth driven by MIDI from NS2. I am impressed. How many “wav” files loaded into Obsidian instances can I use for one song? Does Obsidian play “from disk”, or does it work only with RAM-loaded samples?

    I love how MultiTrack works, but of course it can’t send its 8 tracks to NS2 mixer channels; it has its own mixer. You wrote before that you play audio files and add the “tape” FX. How many files do you usually use per song? Have you run into RAM limits?

    Cheers, Roman

  • what are the pros and cons of using Obsidian as a “wav” player?

    For me the biggest advantage is that, in combination with "sample start" automation, you can get a sort of "fake timestretch", which allows you to change the BPM of a song while the wav loop keeps playing in sync with the rest of the track (the arithmetic behind it is sketched at the end of this post). Also, when you rewind into the middle of a clip, it starts playing immediately (as opposed to Slate, if you trigger the sample loop with one big note).

    Of course it has its limits (it usually works well when you're increasing the project tempo, but not so well when you're slowing it down - each retriggered slice then overshoots the next slice's start point, so you hear small repeats).

    Here is a short video showcase: a drum loop is resampled, loaded into Obsidian, and then the tempo is automated - as you can hear, the drum loop stays in sync with the tempo ;)

    https://blipinteractive.co.uk/community/index.php?p=/discussion/355/little-trick-for-timestretched-drumloop

    How many “wav” files loaded into Obsidian instances can I use for one song?

    As many as your RAM can handle ;-) There is a limit of 24 zones per sample oscillator, though (one zone == one sample).

    Does Obsidian play “from disk”, or does it work only with RAM-loaded samples?

    memory
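
    If anyone wants the arithmetic behind the "fake timestretch" spelled out, here is a tiny sketch (just an illustration, nothing NS2-specific): each retriggered 16th note gets a sample start value pointing at the matching musical position in the recording, so the loop follows the note grid whatever the project tempo is.

    ```python
    # Sketch of the "fake timestretch" arithmetic: retrigger the loop on
    # every 16th note with a sample start value that points at the matching
    # musical position in the original recording.

    def sample_start_curve(loop_bars=4, steps_per_bar=16):
        """Normalized (0..1) sample start value for each 16th-note step.

        Because both the note grid and the offsets are defined in musical
        positions, every retrigger "catches up" the playhead, so the loop
        stays lined up with the project tempo even after tempo changes.
        """
        total_steps = loop_bars * steps_per_bar
        return [step / total_steps for step in range(total_steps)]

    # A 4-bar loop: 64 steps, offsets 0, 1/64, 2/64, ...
    print(sample_start_curve()[:4])   # [0.0, 0.015625, 0.03125, 0.046875]
    ```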

  • edited November 2019

    @dendy Thanks! Nice one! It's some kind of granular synthesis. A lot of combinations could be used there - for example, the same notes with a small time offset on another lane. (I am just guessing, I haven't tested it.)

    Let me ask: when I use my wav file of pads recorded from Thor in Obsidian, there is a small, tiny click, or "restart" sound. Is the only way to avoid it to use one long note (instead of many notes as in your example)?
    Do you know a trick to avoid re-triggering the ADSR? :)

  • edited November 2019

    hm, not sure what click you mean.. if you can upload a simple project with just a single track illustrating that issue, I can check it and maybe find some solution...

  • edited November 2019

    @dendy said:
    hm, not sure what click you mean.. if you can upload a simple project with just a single track illustrating that issue, I can check it and maybe find some solution...

    Thanks, I will send it, but I am not sure it is a good idea, because it looks like I did something wrong. It looks like the SampleStart curve has the wrong speed.

    EDIT: by "click" I mean some kind of small repeat, as if part of the sample plays twice. That is more accurate.

  • One thing to note when using Obsidian for sample playback is that unless you set your filter section to ”stereo”, it will play the samples in mono. So if you have a stereo sample, set your filters to stereo 🙂

    For me, the biggest advantage is the modulation department of Obsidian. You can do crazy or really subtle stuff with LFOs and for example the waveshaper in the filter section.

  • edited November 2019

    For starters you’ll probably want to trim the silence at the start of the sample. Let us know if you are unsure about how to do that and we’ll get you sorted. To my eye the sample looks a little odd compared to the MIDI. I.e. the velocity levels on the MIDI chords don’t change, so the change in audio dynamics seems odd to me, but I probably don’t understand what the intent is. Anyway, play the sample in the Audio Editor to confirm it is correct. Just to be sure.
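
    If you ever need to do the trimming outside NS2, here is a generic numpy sketch of stripping leading silence (the -48 dB threshold is an arbitrary choice); inside NS2 the Audio Editor's trim does the same job.

    ```python
    # Generic sketch: strip leading silence from a mono sample array.
    # The -48 dB threshold is arbitrary - adjust it to taste.
    import numpy as np

    def trim_leading_silence(x, threshold_db=-48.0):
        threshold = 10 ** (threshold_db / 20.0)       # dB -> linear amplitude
        loud = np.flatnonzero(np.abs(x) > threshold)  # indices above threshold
        return x if loud.size == 0 else x[loud[0]:]
    ```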

  • edited November 2019

    ok, a couple of things were wrong.. basically everything :)

    here is a working example:
    https://www.dropbox.com/s/ds7qu0j1ygkv8q1/SampleStartAutomationFixed.nsa?dl=0

    what I did:

    1/ As @SlapHappy mentioned, I trimmed the sample. I didn't just remove the silence from the beginning, I also trimmed the end so that the sample is an exact number of bars long (9 in this case). This is good practice because it makes setting up the sample start automation curve easier - the curve needs to be exactly the same length as the sample (the bar-length arithmetic is sketched at the end of this post).

    With grid snapping set to 1 bar it's easy as a breeze. Sometimes you need to add a bit of silence to the end of the sample - in that case, just duplicate some end part of the sample and, while the duplicated part is still selected, hit Volume > Mute.

    2/ In the Obsidian mod matrix, I added "knob1 > sample start". We want this knob to modulate the sample start position.

    3/ I resized the clip to 9 bars, to match the sample length.

    4/ Now the important part. Sample offset modulation is applied only at note start. So instead of one long note we need to draw a series of short notes; thanks to the sample start modulation, every note will then start at the proper sample position and play from there.
    Usually 16th notes work best, but depending on the material you can experiment with shorter notes. Or longer - just always bear in mind that the sample start automation is applied only when a note starts.

    5/ Last thing - the automation curve must exactly match the length of the pattern (and of the note series in the pattern).
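
    For step 1/, here is the bar-length arithmetic as a small sketch (illustrative only, not an NS2 feature): work out how many samples an exact number of bars takes at the recording tempo, then trim or pad with silence to match.

    ```python
    # Sketch for step 1/: how long "exactly N bars" is in samples at the
    # recording tempo, then trim or pad with silence to hit that length.
    import numpy as np

    def pad_to_bars(x, sr, bpm, bars, beats_per_bar=4):
        target = round(bars * beats_per_bar * 60.0 / bpm * sr)  # samples in N bars
        if len(x) >= target:
            return x[:target]                                   # trim the overhang
        return np.concatenate([x, np.zeros(target - len(x), dtype=x.dtype)])

    # e.g. a loop recorded at 120 BPM, padded/trimmed to exactly 9 bars:
    # y = pad_to_bars(x, 44100, bpm=120, bars=9)   # -> 793800 samples (18 s)
    ```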

  • @romanch said:
    I love how MultiTrack works, but of course it can’t send its 8 tracks to NS2 mixer channels; it has its own mixer.

    There’s nothing to stop you from using multiple instances with only one track inside, each on a separate NS2 track.

  • edited November 2019

    @Stiksi said:
    One thing to note when using Obsidian for sample playback is that unless you set your filter section to ”stereo”, it will play the samples in mono. So if you have a stereo sample, set your filters to stereo
    🙂

    For me, the biggest advantage is the modulation department of Obsidian. You can do crazy or really subtle stuff with LFOs and for example the waveshaper in the filter section.

    Cool, thanks! That explains why I get no stereo on the Obsidian output :)

    To my eye the sample looks a little odd compared to the MIDI

    I suppose the problem is in the recording start position; something goes wrong between Audiobus and NS2 when recording. I need to play more with recording from AB to NS.

  • edited November 2019

    @dendy said:
    ok, a couple of things were wrong.. basically everything :)

    That is just my strongest skill: "everything is wrong" B)

    Cool explanation, @dendy! Thanks for the file and the screenshots! Now the sample sounds much smoother.

    Also, as far as I can see, you used 8-voice polyphony for Obsidian instead of one voice (as I did at the start); with pad samples it gives a smoother result.

    I need to play more with this technique. It looks very interesting. The sample start curve can be non-linear - it's an area for many experiments!

  • edited November 2019

    @number37 said:

    @romanch said:
    I love how MultiTrack works, but of course it can’t send its 8 tracks to NS2 mixer channels; it has its own mixer.

    There’s nothing to stop you from using multiple instances with only one track inside, each on a separate NS2 track.

    And use the same MT project on every track, but solo a different MultiTrack mix channel in each? Is that what you mean?

    EDIT: It doesn't matter, I guess, because I can create 2 or 3 MT projects and use them.

  • @romanch
    Also, as far as I can see, you used 8-voice polyphony for Obsidian instead of one voice (as I did at the start); with pad samples it gives a smoother result.

    Oh, I didn't change the polyphony.. actually, for this kind of workflow you don't need more than 2 voices. In some cases even 1 voice works better, but 2 voices give the AMP envelope room to fade smoothly.

    Sometimes it is good to play with the AMP envelope ATTACK / RELEASE values to find the sweet spot where it sounds most natural.

    If you like experimentation, try changing the notes from C3 to something else - it works like pitch shifting. If you don't go too far away from the root C3 note, you can use it to alter the original melody in the wav loop ;)
    This is what I like most about this workflow - it opens the door to experimentation. You can also try playing with the automation curve to play the recorded wav not in linear order from beginning to end, but with various glitches and jumps in the playhead position (from subtle changes up to Squarepusher/Aphex Twin madness).
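
    To make the "glitches and jumps" part concrete, here is a tiny sketch (my own illustration): start from the linear sample start sequence and randomly send some steps somewhere else in the loop. Drawn into the sample start automation lane, a curve like this covers the whole range from subtle stutter to full madness.

    ```python
    # Sketch of "jumps in playhead position": take the linear sample start
    # sequence and randomly relocate a few of the steps.
    import random

    def glitch_curve(total_steps=64, jump_chance=0.15, seed=1):
        rng = random.Random(seed)
        curve = []
        for step in range(total_steps):
            if rng.random() < jump_chance:
                curve.append(rng.random())            # jump anywhere in the loop
            else:
                curve.append(step / total_steps)      # normal linear playback
        return curve
    ```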

  • edited November 2019

    @dendy said:

    @romanch
    Also, as far as I can see, you used 8-voice polyphony for Obsidian instead of one voice (as I did at the start); with pad samples it gives a smoother result.

    Oh, I didn't change the polyphony.. actually, for this kind of workflow you don't need more than 2 voices. In some cases even 1 voice works better, but 2 voices give the AMP envelope room to fade smoothly.

    Sometimes it is good to play with the AMP envelope ATTACK / RELEASE values to find the sweet spot where it sounds most natural.

    If you like experimentation, try changing the notes from C3 to something else - it works like pitch shifting. If you don't go too far away from the root C3 note, you can use it to alter the original melody in the wav loop ;)
    This is what I like most about this workflow - it opens the door to experimentation. You can also try playing with the automation curve to play the recorded wav not in linear order from beginning to end, but with various glitches and jumps in the playhead position (from subtle changes up to Squarepusher/Aphex Twin madness).

    Yes, cool! This trick opens the door to a bunch of crazy things =)

  • @romanch said:
    EDIT: It doesn't matter, I guess, because I can create 2 or 3 MT projects and use them.

    This is what I meant. :)

  • Hi, well, I’m late to the game too. For me the importance of audio tracks is, first, track freeze like Auria and Cubasis have - I do large orchestrations sometimes and there is just not enough juice in my 4 GB iPad Pro to run all those apps and FX at one time. Second, I sometimes use only hardware synths, and being able to record those into the DAW is crucial. I’m primarily a Cubasis user, though I do own Auria Pro. The thing I like most about NS2 is that I can import the MIDI files from all my old Reaper projects, with lots of time signature and tempo changes, and those changes are all there - that doesn’t happen with Cubasis or Auria. I’m willing to make NS2 my main DAW once audio and track freeze are added, which hopefully will be the case. As far as time stretching goes, I do use it in Cubasis on a semi-regular basis to get my .wav drum stems to fit, so that would be stellar in NS2 too. I’m willing to pay for audio as an IAP, or for a whole new version of the app.

    C

  • edited December 2019

    I’m not sure people understand what is going on.

    Matt is working on audio tracks. He’s doing it. He is not NOT working on it. It’s not a question of if we all ask, THEN he’ll do it. He’s already doing it. It seems as though people think he needs convincing or something. He doesn’t. He just needs that precious commodity that there is so little of...

    Time.

    He’s one guy writing all the code that happens to be a perfectionist and we have benefited from that with the usability of NS2. I’m sure he wants audio tracks to have the same seamless feeling everything else does when you start working with it.

    I don’t understand why people seem to think... he’s not doing it :lol:

  • @drez said:
    I’m not sure people understand what is going on.

    Matt is working on audio tracks. He’s doing it. He is not NOT working on it. It’s not a question of if we all ask, THEN he’ll do it. He’s already doing it. It seems as though people think he needs convincing or something. He doesn’t. He just needs that precious commodity that there is so little of...

    Time.

    He’s one guy that is a perfectionist and we have benefited from that with the usability of NS2. I’m sure he wants audio tracks to have the same seamless feeling everything else does when you start working with it.

    I don’t understand why people seem to think... he’s not doing it :lol:

    Well said!

  • @drez said:
    I’m not sure people understand what is going on.

    Matt is working on audio tracks. He’s doing it. He is not NOT working on it. It’s not a question of if we all ask, THEN he’ll do it. He’s already doing it. It seems as though people think he needs convincing or something. He doesn’t. He just needs that precious commodity that there is so little of...

    Time.

    He’s one guy writing all the code that happens to be a perfectionist and we have benefited from that with the usability of NS2. I’m sure he wants audio tracks to have the same seamless feeling everything else does when you start working with it.

    I don’t understand why people seem to think... he’s not doing it :lol:

    So emotional... What is the problem with asking?
    Many new NS2 buyers one day run into the lack of audio tracks. It's a perfectly normal question to ask, especially once you see that it's a development priority.

    People love NS2, but no one sees any information about what's going on, so they ask. Their questions are just signals to the developer: "we are waiting, it will be cool, we use NS2, it is a great app, we love it!"

    That's okay. Why shouldn't they ask that question? Not everyone sees through walls like you.

    Just as an example: if I were the developer, it would be hard for me to make decisions about priorities. I would not be sure whether I should develop this new feature, or something else, or a third application for the money. If I saw reviews and saw that people are waiting, it would be easier for me to make a decision. That would inspire me.

    It is cool. The community is growing, life goes on.

  • edited December 2019

    @romanch
    Not everyone sees through walls like you.

    just to avoid misunderstandings, @drez is "just" a satisfied user (who made two amazing tracks in NS2 :)) .. he has no more information than you ;)

    I want to ask everybody to keep this topic calm and easy. Lots of users, lots of workflows, lots of opinions.. all are equally valid.

    As was mentioned, audio tracks are the top-priority feature; nobody wants them finished more than Matt, and he knows very well how big the demand for this feature is. He just wants to do it right, in a solid, stable, reliable way consistent with the rest of NS2's features...

  • I’m sure that while we’re all patiently waiting with calm emotions, level heads and nothing but good intentions towards Matt, we could use our time more wisely. Maybe we should sit down more at a piano or keyboard and practice, practice, practice; pick up your guitar even for a few minutes every day and improve that part you always screw up when record is activated; maybe sing a lot more to get better command of your so-so voice..... see, that’s not as painful as we think it is, except maybe to the ones we live with who have to endure our obsession😁

  • @Arpseechord said:
    I’m sure that while we’re all patiently waiting with calm emotions, level heads and nothing but good intentions towards Matt, we could use our time more wisely. Maybe we should sit down more at a piano or keyboard and practice, practice, practice; pick up your guitar even for a few minutes every day and improve that part you always screw up when record is activated; maybe sing a lot more to get better command of your so-so voice..... see, that’s not as painful as we think it is, except maybe to the ones we live with who have to endure our obsession😁

    Ha! I wish I did all those 😀 I do pick up the guitar and sometimes make incremental progress, though.

    Waiting is such sweet sorrow. No, wait, it was partitioning. Partitioning is such sweet…

  • @Stiksi said:

    @Arpseechord said:
    I’m sure that while we’re all patiently waiting with calm emotions, level heads and nothing but good intentions towards Matt, we could use our time more wisely. Maybe we should sit down more at a piano or keyboard and practice, practice, practice; pick up your guitar even for a few minutes every day and improve that part you always screw up when record is activated; maybe sing a lot more to get better command of your so-so voice..... see, that’s not as painful as we think it is, except maybe to the ones we live with who have to endure our obsession😁

    Ha! I wish I did all those 😀 I do pick up the guitar and sometimes make incremental progress, though.

    Waiting is such sweet sorrow. No, wait, it was partitioning. Partitioning is such sweet…

    Perhaps Romeo and Juliet wouldn’t have ended so tragically if each had had a modern device. They would have been so self-absorbed and preoccupied that they wouldn’t even have been aware of each other 😉
