One distinctive feature of Laravel's optional architecture is the command/event system. Commands are supposed to represent single, discrete pieces of logic that handle a particular job. Events are a way to notify the system that something occurred. Both commands and events employ handlers, and a command can implement the SelfHandling interface to combine the data of a Command object with the logic that executes it. While the ideas are great, the implementation is still messy, and I want to talk about some flaws in the architecture that, bluntly stated, don't make a lot of sense to me.
I’ve encountered the idea of Command objects before. Simply put, they represent actions that, given some input, do something. In essence, they can replace your controller actions, increasing reusability by isolating your logic in a separate unit. As an example, if a command’s job were to create a blog post, you could call that command from a job or from a different service interface.
The way Commands are executed is through the Command Bus. In truth, the Command Bus isn’t strictly necessary: if you implement the SelfHandling interface, you can simply instantiate a Command and run its handle() method directly. Most of the time, you will probably use artisan make:command to generate a command, which automatically creates skeleton code with the SelfHandling interface implemented.
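To make this concrete, here is a minimal sketch of what a self-handling command looks like; the CreatePost class and its fields are illustrative on my part, not the literal generated skeleton.

```php
<?php

namespace App\Commands;

use App\Post;
use Illuminate\Contracts\Bus\SelfHandling;

// Hypothetical example: the constructor carries the command's data,
// while handle() contains and executes the logic (SelfHandling).
class CreatePost extends Command implements SelfHandling
{
    protected $title;
    protected $body;

    public function __construct($title, $body)
    {
        $this->title = $title;
        $this->body  = $body;
    }

    public function handle()
    {
        // Persist the post and hand it back to whoever ran us.
        return Post::create([
            'title' => $this->title,
            'body'  => $this->body,
        ]);
    }
}
```

From a controller you could either run it directly with `(new CreatePost($title, $body))->handle()` or send it through the bus with `$this->dispatch(new CreatePost($title, $body))`.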
For me, this mechanism, while well intentioned, feels as though it’s been lazily implemented. From a high-level abstraction principle, I can see why one would want to separate the Command’s data from the Command’s processing. In practice, however, the Command object by itself is merely a Data Transfer Object (DTO), and in my experience it doesn’t have much reuse on its own without the handle() method directly attached. I know that part of the intent is to be able to map Form Requests onto something like a DTO, which can then be applied to a Model. But at that point, the extra DTO layer provided by the isolated Command object just becomes too cumbersome.
Another area that I feel suffers from weak implementation is the Command Bus and its notion of a pipeline. I get the main idea of how these two things ought to work together, but the documentation detailing exactly how to combine them in practice is really lacking on the Laravel website. Part of the problem is that Commands, by design, are not supposed to return anything from the handle() method. Yet if you examine the dispatch() method, it hands back a value once your command executes. That part of the design really confuses me.
You also have the pipeline, which in my mind is similar to how pipes work in a Unix system: you chain commands together so that the output of one command somehow gets mapped into the next one. However, if the recommended practice is to avoid having a return value, this design becomes moot. How exactly can you map the output of one command into the next?
When I did some research, I found some hand-rolled solutions that took commands and chained them together. But why isn’t this better supported in the native architecture? Is this something the developers are still working out?
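For what it’s worth, a hand-rolled chain along these lines is not much code. This is my own sketch, not a Laravel API; every name here is made up, and the idea is simply that each step’s handle() return value feeds a factory that builds the next command.

```php
<?php

// Hypothetical helper: a Unix-pipe-style chain where the output of one
// command's handle() becomes the input for constructing the next one.
class CommandChain
{
    protected $steps = [];

    // $factory takes the previous step's output and returns an object
    // exposing a handle() method.
    public function pipe(callable $factory)
    {
        $this->steps[] = $factory;
        return $this;
    }

    public function run($input = null)
    {
        $output = $input;
        foreach ($this->steps as $factory) {
            $output = $factory($output)->handle();
        }
        return $output;
    }
}
```

Usage would look something like `(new CommandChain)->pipe(function () use ($data) { return new RegisterUser($data); })->pipe(function ($user) { return new AssignRole($user); })->run();`.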
I think part of the problem is that Commands, as of now, are pretty simple in essence. You provide a constructor where you pass any inputs into the command, then you process them through the handle() method. There isn’t any object returned the way you can return a view from a controller action. If there’s one major aspect of Command objects that needs revisiting architecturally, it’s forcing people to implement some sort of return type, such as another DTO carrying error messages, additional data similar to some RESTful interfaces, and flags that allow the next command to continue processing. Then provide mechanisms to inject things like logging, transactions, etc.
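As a sketch of the kind of return type I mean (my own invention, not anything Laravel provides), handle() could be required to hand back something like:

```php
<?php

// Hypothetical result DTO: carries errors, extra payload data, and a
// flag that tells a bus or chain whether to keep processing.
class CommandResult
{
    public $data;
    public $errors;

    public function __construct(array $data = [], array $errors = [])
    {
        $this->data   = $data;
        $this->errors = $errors;
    }

    // A pipeline could consult this before running the next command.
    public function shouldContinue()
    {
        return empty($this->errors);
    }
}
```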
Right now, the way I deal with command chaining is to avoid it altogether. I really want the people designing Laravel to come up with a standard architecture that improves upon this idea. I know that in 5.1 they will be changing commands into jobs. While this does clarify the intent of what a command is supposed to do, it seems more focused on the console aspect. We still don’t get some form of chained processing, which is what I really would like to see.
Rather than chaining commands, what I do instead is fire Events. In some ways, it’s almost like command chaining, except you’re essentially informing the system that a particular thing is happening. This part of the system gets a little tricky in terms of design, because it can be confusing to know when one ought to use commands over events and vice versa.
The way I think about commands vs. events is that commands are things I want to happen right now, whereas events are things that have happened and that I’m shouting out to the world so it can react. A great example occurs when a new user registers. In most systems, the big parts of registration are first validating the input and then storing it. Here, you can use a Request object to handle the form validation. The command can then do the job of persisting the data. In most cases that might touch only a single table, but it might also involve things such as adding a role to the user, signing that user in, adding notification settings for that user, etc.
In this situation, where do we draw the line on what to put in the command object? My rule of thumb is that if some part of the processing requires immediate feedback, it goes into the command. Here, persisting the user’s information into a users table, as well as adding a role to their account so that they can be automatically signed in, can all be part of the command.
However, sending an email to confirm that the user has created an account does not require immediate feedback and can be put into a background job. So your event would simply inform the system that a user has just been created. You would probably store the successfully created row on the event, because it might contain an id column required for part of the email response.
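Putting the registration example together, a sketch might look like the following; the class names and the Role lookup are all assumptions on my part, not code from any real project.

```php
<?php

use Illuminate\Contracts\Bus\SelfHandling;
use Illuminate\Support\Facades\Auth;
use Illuminate\Support\Facades\Event;

// The command keeps only the work that needs immediate feedback.
class RegisterUser implements SelfHandling
{
    protected $input;

    public function __construct(array $input)
    {
        $this->input = $input;
    }

    public function handle()
    {
        $user = User::create($this->input);        // persist the user
        $user->roles()->attach($this->input['role_id']); // grant a role
        Auth::login($user);                        // auto sign-in

        // Shout it out: the persisted model carries the id that a
        // queued email handler will need later.
        Event::fire(new UserHasRegistered($user));

        return $user;
    }
}

// The event is still a DTO, but queuing its handler keeps the slow
// email work out of the request cycle.
class UserHasRegistered
{
    public $user;

    public function __construct(User $user)
    {
        $this->user = $user;
    }
}
```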
Like the command object, though, the event/event handler pairing has a similar flaw in that the event object itself is just another DTO. I do think it has a little more meaning in the way it can be used, since you can queue events. However, what I dislike is the somewhat informal way of creating events and their respective handlers. You’re still using a more or less useless object to transfer a particular set of data to something else that will process it. This makes commands and events feel as though they overlap in purpose and are a little redundant with one another, not to mention adding the overhead of confusion from more layers of obfuscation.
The other thing is that, like Commands, Event Handlers themselves are not supposed to return anything. One common complaint I’ve read is that not having a return value makes unit testing a lot more difficult. From my own experience, this difficulty gets compounded once you have Commands firing events that fire even more events.
Lastly, one of the biggest flaws I see in this architecture is the potential for getting into an infinite loop. There really isn’t anything that prevents an event from calling a command and vice versa. Worse, in a poorly designed system, you have the potential of having them call each other. And there are times when I can see myself wanting to call a Command object inside my Event handler just to avoid replicating logic.
Overall, I understand the architectural goals these mechanisms aim for. However, both need a serious cleanup, better recommended practices, and perhaps even some way of combining the two for cases where the logic in one section needs to be shared.