New DAW RipX for Remixers

This seems to be something in between Spectral Layers and Melodyne. It would be interesting to know how the quality of its stem separation compares to Spectral Layers or to the Demucs project.
There is a crossgrade price against Spectral Layers and Melodyne.
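
For a rough quality comparison, reference stems can be generated locally with the open-source Demucs separator; a minimal sketch, assuming the demucs Python package is installed ("song.mp3" is just a placeholder file name):

# Generate reference stems with Demucs (pip install demucs).
import demucs.separate

# Equivalent to the CLI call "demucs -n htdemucs song.mp3"; the stems
# end up under ./separated/htdemucs/song/ as vocals, drums, bass and other.
demucs.separate.main(["-n", "htdemucs", "song.mp3"])

Those files can then be A/B'd against what RipX or Spectral Layers produce from the same track.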
 
Out of the box, RipX produces more usable samples than Spectral Layers does, but in Spectral Layers you can fine-tune things well and then get comparable results; you can do that in RipX as well, and RipX also has a spectral-layer editor of its own. In any case, those two are the top tools in this area, although there are plenty of people who are perfectly happy with what FL Studio offers.
 
My source material is an electric guitar playing eighth notes.
In RipX the notes are not recognized correctly; in Melodyne they are.
(Export to MIDI)

Instead of the eight eighth notes, RipX creates three notes of different lengths.
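
To pin down what the export actually contains, the MIDI files from both programs can be inspected programmatically; a minimal sketch using the pretty_midi package ("ripx_export.mid" is just a placeholder for whatever RipX wrote):

# Count the notes and their lengths in an exported MIDI file
# (pip install pretty_midi); "ripx_export.mid" is a placeholder name.
import pretty_midi

midi = pretty_midi.PrettyMIDI("ripx_export.mid")
for inst in midi.instruments:
    print(f"{len(inst.notes)} notes in instrument '{inst.name}'")
    for note in inst.notes:
        # Eight straight eighth notes should show up as eight entries
        # of roughly equal duration.
        print(f"pitch={note.pitch}  start={note.start:.3f}s  "
              f"duration={note.end - note.start:.3f}s")

Running the same check on the Melodyne export makes the difference in note detection directly comparable.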
 
Seems to be a well-known problem. Quote:
I've been using RipX for a while now. I happened to be Googling for some information on an issue I experienced and hit this thread. Since I'm a Fractal user I figured I'd add my $.02 since I've come to some conclusions about its capabilities, especially as related to what they market.

Stem separation is pretty good, but a mixed bag. IMO, close enough for government work, but if you're looking for perfection, you're gonna need to roll up your sleeves and get your hands dirty. If you're looking to get rid of vocals, keyboards and guitars to get the rhythm tracks it will be pretty good. Solid B+. Although as you listen to each isolated stem, it's going to sound really funny/filtered/modulated in some cases. More on that below. You can kind of fix this, but as they allude to (but never come out and say) you have to start manually editing the little artifacts if you're looking to do detail work. Bottom line, the more stuff in a track and the busier the parts the less precise it's going to be.

It's obviously making decisions on where to place things based on frequency ranges and positions, so if you hit a spot where all the instruments are going at once and there's not a lot of space be prepared for dropouts and little bits that don't sound quite right. And outright misassignments. For example, seeing pieces of the guitar parts assigned to the Guitar Layer and some of the others dumped into the Violin Layer. And keyboards? Flip a coin. So if you want the whole song, just without one guitar part, as in that previous case you'd need to separate everything out and then edit the individual layers. Removing the guitar part you didn't want artifact by artifact and leaving the keys and second guitar.

Is this possible? Sure it is. Do you necessarily want to do that? I don't, at least not in this application. If you want to do shit manually, you can get Spectra Layers by Steinberg and get a set of editing tools that are far superior to this. What you get is kind of primitive - you've essentially got drag/drop, cut/paste, scissors/glue, which for most people is about as far as they are going to go. And here's the other thing: just try listening to one little isolated artifact that goes by in a matter of milliseconds and decide what to do with it. Move it here, move it there, move it sideways or back again. Glue it to the one before or split it and move the parts elsewhere? You could make one track your life's work, and given I'm pushing 70 that's a very limited period. This is the easy, just push the button version so it does more stuff automatically but makes the 20% part of the 80/20 rule even harder to get. Depends on what you want.

If you're looking at it for karaoke, it's awesome if you don't care about backing vocals. Advance to Go and collect $200.

For learning songs it's A- to B+ depending on what you want. For hearing an isolated part, it works very well considering all the guitar parts are going to be merged into one or two stems most of the time. So getting the vocals, drums and bass out of the mix is certainly going to help. If you're a bass player or drummer, the news is better because you'll get superior separation. But as with the guitar stems you're going to see it make some weird stem decisions. It's not unusual to see 3 parts produced: drums, kick, and percussion and see only a few note artifacts in one of them. In which case you can just always select that or manually move one up to a different layer if it bothers you.

But once I've got my chosen stem(s), I get them out of that environment: export the stem to audio and then use your DAW and something like Transcribe or Slow Downer to do the actual transcription. You can't assign hotkeys or keystrokes, and the options for setting the starting point, looping, or changing pitch/tempo are minimalist to non-existent. I use a MIDI footswitch and Bome MIDI Translator for hands-free operation, and this is pretty much useless in that regard. Again, basic stuff, but power users need not apply.

Export is a mixed bag as well. Exporting to audio works reasonably well - just be aware that if you're going to export a stem and listen to it in isolation, it's often not going to sound like a track you just recorded yourself. In the mix it was separated from, it has been EQ'd, limited, processed, etc., so when heard in isolation it's not dry unless it was recorded that way. Once you add other instruments back in you may not even notice this, but it's certainly going to be a YMMV proposition. Exporting MIDI has similar caveats. Bass works pretty well. Guitars and keys - don't even bother unless you're going to do a lot of manual editing first. Drums are in the middle, but most of the time skewed more towards the guitar/keys end of the spectrum. I originally thought, "Great way to get drum grooves. Just export as MIDI and play with Superior Drummer." It didn't work out that way. In versions before 6.0.3 this didn't work at all; you imported the file and always got a message about a corrupt MIDI file. In 6.0.3 it now imports, but when you play it the drum track is completely unrecognizable.

Now I will say that if you take a guitar or vocal track and drop it into something like EZ Drummer's tracker utility you can get some interesting things. Especially if you're trying to write something with a similar feel to a particular song but your own take. Although strangely enough dropping an isolated audio drum track never got me anything all that great. One more reason why the AI moniker everyone uses should probably be changed to AMI - Artificial Mediocre Intelligence.

As for help after purchase... I actually contacted their support about the drum MIDI export thing. I was told that it looks like some of the drum hits were combined and some of the cymbal strikes were seen as multiple notes. They then said to just use split/join to edit it so it works. No specifics on where they saw or heard this, or an example of how they would apply that solution in a way that would result in a correct MIDI export. I did go back and check the MIDI output of the song I provided as an example, and from what was playing I could not determine where drum hits got combined or one cymbal hit sounded like multiples. In fact, the MIDI file it exported played like a couple of people falling down the stairs with a garbage can in each hand. Checking their forum shows the same kind of answers. So for the support, be prepared for what I call software gaslighting - where the response will generally suggest something they would not be willing or able to do themselves and imply it's your problem for being lazy or incompetent without actually stating it in that language. It's there to hit some software-metrics-driven SLA and worded to make it look like they were really trying to help if there's a dissatisfied response.

I'm just assuming here that most in this forum will not be interested in the Hit 'n Mix claims that you can use this to remix a full track or correct individual errors. If you are, you can probably infer the effectiveness from the above commentary. Strictly a YMMV situation, although to be fair, if you wanted to apply some additional processing or editing to an isolated range of sounds and you knew exactly where it was going to be, you'd have some utility. But be aware all the examples and demos they use are extremely limited in scope and were obviously developed specifically for this purpose. I can just hear them discussing this stuff before they do it: "Hey, make sure and get all the keyboard stuff and a couple of those other guitar parts out of there so it looks easier." These demos are a lot like when a real estate agent shows you a house, raves about the remodeled bathroom, marble tile, and granite counter, but then you flush the toilet and it overflows.

My advice would be to hold off until you can get it for the 30% off sale they often run. Don't go for 20%, because last year they ran an Early Black Friday sale at 20% but then on Black Friday dropped it to 30%. I did get them to adjust the price, but similar to support they kind of made it sound like it was my fault for thinking Early Black Friday would be the same as Black Friday and that they were just adjusting it out of the goodness of their hearts. My guess is that they sometimes run it at 20% and, if it doesn't look like enough people are biting, drop it.

Thumbs up, but keep your expectations managed.
 
What for?
Remixes, remixes. The internet is already full of remixes and covers. Every day on YouTube there are videos of someone meticulously recreating the synthesizer sounds of well-known songs and replaying entire tracks part by part. Everyone just separates stems now. Are people today so uncreative that they can no longer manage their own music, I often ask myself. Maybe it's also for the nice feeling of having produced that one great hit yourself back in the day, now that you can solo the vocals. Strange times. Stemming something out of a track without permission and using it is still a hot topic, too. But maybe that is the future of the DAW, only the other way around: un-mixing finished songs into individual tracks. This reverse engineering is also a nice hobby and pastime, and above all much less strenuous than having to laboriously learn an instrument first and come up with something new.
 

Well, you need it just for mash-ups alone when no individual tracks are available. It's also great for cover bands, because you can pick out the small details much more easily. And you can learn a lot when you can hear the individual tracks without any other context.

And it's simply a hell of a lot of fun to build something of your own out of a well-known song.
 
...or, on a stereo master, taking just one separated voice and splitting it further into transients, e.g. to put a reverb on those alone. At least in Spectral Layers you can do that kind of thing. Afterwards you simply merge the split recording back together. That's pretty handy when all you have is a stereo recording (e.g. concert recordings on a field recorder).
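
Outside of Spectral Layers, a comparable transient/sustain split can be sketched offline with librosa's harmonic-percussive separation; this is a stand-in technique, not what Spectral Layers does internally, and "stereo_mix.wav" is a placeholder file name:

# Rough transient/sustain split with librosa's HPSS
# (pip install librosa soundfile); simplified to mono for the sketch.
import librosa
import soundfile as sf

y, sr = librosa.load("stereo_mix.wav", sr=None)
sustain, transients = librosa.effects.hpss(y)

# The transient part can now be processed on its own (e.g. sent to a reverb)
# and summed back with the sustain part afterwards.
sf.write("sustain_part.wav", sustain, sr)
sf.write("transient_part.wav", transients, sr)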
 
...ooor... for a song whose vocals have been annoying you for over thirty years, you can now separate the vocals with AI and finally re-sing them yourself ;-) I'll get started on that in the next few weeks...
 
This reverse engineering is also a nice hobby and pastime, and above all much less strenuous than having to laboriously learn an instrument first and come up with something new.

But unfortunately that's how it is with everything these days. Just look at the video game industry: every year at least one new Call of Duty and FIFA umpteen. Old classics get remastered and remade...

At some point humanity has apparently run through all its ideas.
 
