TDD is freaking awesome!
Seriously though, for most projects, TDD is a great place to start!
Let's go on a TDD journey and see how it affects the design of a project!
So I'm working on a project where I have a reasonable idea of how I'm going to design it and break up the abstractions. First I choose a central set of entities and begin writing tests for them.
Because TDD forces me to write the tests first, it guarantees that the project is easily testable. If I choose the wrong part to test, or the wrong way to test it, my life quickly becomes painful, so I reevaluate those decisions early: I'm not having TDD fun, I'm having TDD pain. TDD will quickly tell you when you're doing it wrong, because the tests become extremely painful to write.
As we go forward I follow my design and TDD enforces the testability of the design, and allows me to quickly refactor the design if I have miscalculated anything! Woohoo!
Another time, I'm building a new project and I have no clue how to design it. I'm not really sure how it will interact with other systems, what the API should be, or where the abstractions lie internally.
This time I just choose what seems a reasonable place to wrap the logic and begin writing tests. I write the hackiest, ugliest code in a single file to satisfy the tests and focus on the quality of the tests, not the actual code. After working on the project for some time it really starts to creak: there are tonnes of nested conditional logic and the edge cases are becoming quite painful. But at some point it becomes quite easy to sit down and come up with a design that actually satisfies the project.
With design in hand, I can sit down, bin off the file that I was using for the main code, and leverage the tests to confirm that the new design works! GLORY!
Don't agree? Leave a comment!
Thursday, 30 August 2018
Sunday, 19 August 2018
using errors
Much error handling has to be tailored to the situation; at a minimum, you will want an overall error handler for an API call that handles unexpected errors, reports them to the consumer, and probably writes a log.
Sometimes we can do something about an error before it hits the default handler; in this case, we return cached data instead.
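A minimal sketch of that idea (getRemoteData and getCachedData are hypothetical helpers):

```js
// Recover before the error ever reaches the default handler.
function loadDashboard() {
  return getRemoteData()                    // hypothetical promise-returning call
    .catch((err) => {
      console.warn('falling back to cached data:', err.message);
      return getCachedData();               // hypothetical synchronous fallback
    });
}
```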
One important thing to note is the extra value that is added by using the Error class, even when rejecting a promise.
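Something along these lines:

```js
function loadThing() {
  return new Promise((resolve, reject) => {
    // reject('something went wrong');           // a plain string carries no stack
    reject(new Error('something went wrong'));   // an Error records the stack at this point
  });
}

loadThing().catch((err) => console.error(err.stack));
```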
This means that when we log the error in the error handler we also get a stack trace, because a new Error records the current stack at the point it is created.
In some cases, we may not be able to fix the problem but we may be able to add additional information to help us when reading the logs later.
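A sketch of adding that context before re-throwing (fetchAccount is a hypothetical promise-returning call):

```js
function loadAccount(userId) {
  return fetchAccount(userId)
    .catch((err) => {
      // Enrich the message so the log written by the default handler has more to go on.
      err.message = `failed loading account for user ${userId}: ${err.message}`;
      throw err; // still bubbles up to the default handler
    });
}
```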
Now the log will contain the user id and save us some time when debugging later!
Some error cases may be something we cannot do anything about, but the user can. In this case, we need to tell them about it; that hopefully saves us being contacted by the consumer/user, and they can fix their own problem.
In this case, we create an additional error type that extends Error so that we can bubble it up and alert the user that their account is disabled.
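A sketch of what that could look like (the handler shape here is an assumption, not tied to any particular framework):

```js
class AccountDisabledError extends Error {
  constructor(userId) {
    super(`account ${userId} is disabled`);
    this.name = 'AccountDisabledError';
    this.userId = userId;
  }
}

// In the overall handler we can pick this type out and tell the user directly.
function handleError(err) {
  if (err instanceof AccountDisabledError) {
    return { status: 403, message: 'Your account is disabled, please contact support.' };
  }
  console.error(err);
  return { status: 500, message: 'Something went wrong.' };
}
```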
Saturday, 18 August 2018
express error handling
Here are a few ideas for error handling in an express app.
For all cases, you will want every route to handle any errors that can occur so that we can notify the caller and potentially do some logging.
Now, this can become quite tedious and repetitive, so we can pass the error to Express's default error handler!
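Something like this, assuming an Express app set up elsewhere (loadUser is a hypothetical promise-returning helper):

```js
app.get('/users/:id', (req, res, next) => {
  loadUser(req.params.id)
    .then((user) => res.send(user))
    .catch((err) => next(err));   // hand the error off to the default error handler
});
```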
This will cause Express to pass the error out to the client, which in most cases is probably not what we want, so let's have a look at replacing the default error handling middleware.
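A minimal sketch of a replacement handler — the four-argument signature is what marks it as error-handling middleware:

```js
app.use((err, req, res, next) => {
  console.error(err);                                          // default logging
  res.status(500).send({ message: 'Something went wrong.' });  // what the consumer sees
});
```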
This now gives us control to add some default logging and return the status we want, as well as any information we want to show to the consumer.
One really important thing to remember is to register this function after you define your route handlers (at the bottom of the file), or it won't catch the errors!
There are a few different types of things you might be doing in the routes so here are a few examples of how to pass the errors.
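First, synchronous code — a sketch:

```js
app.get('/sync', (req, res) => {
  // Express catches synchronous throws in a handler and forwards them for us.
  throw new Error('something broke synchronously');
});
```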
This one just handles synchronous code that throws errors.
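Next, callback-based code — a sketch using fs:

```js
const fs = require('fs');

app.get('/callback', (req, res, next) => {
  fs.readFile('./config.json', 'utf8', (err, data) => {
    if (err) return next(err);   // callback errors have to be passed to next() explicitly
    res.send(data);
  });
});
```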
In this case, we see an example of how to pass callback-based errors.
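And finally promises — a sketch with the same hypothetical loadUser helper:

```js
app.get('/promise', (req, res, next) => {
  loadUser(req.query.id)
    .then((user) => res.send(user))
    .catch(next);   // a rejection passed to next() activates the error middleware
});
```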
And then moving into promises, as long as we make sure that we pass the error to next, the error handler middleware will be activated.
More on error handling strategies to come!
Friday, 10 August 2018
async/await in node.js: error handling
So, let's have a quick look at async/await error handling!
Like last time, we are wrapping our code in a self-executing function so we can use the async/await keywords.
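A minimal sketch of the shape this takes:

```js
(async () => {
  try {
    // a stand-in for any awaited call that rejects
    const data = await Promise.reject(new Error('something went wrong'));
    console.log(data);
  } catch (err) {
    console.error('caught:', err.message);   // the rejection lands here
  }
})();
```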
To handle errors in async functions we use a try-catch block, like we normally would in synchronous code. You can see from the example that a rejected promise ends up in our catch block.
As we know from previous posts a promise will be rejected if reject() is called or an error is thrown inside of the promise.
Also, if you throw an error in an async function, this results in a rejected promise. So you can see now how it all fits together: any promise rejections or errors thrown result in the catch block being used.
It's also worth noting that non-awaited rejected promises will not trigger the catch block; instead they raise an unhandled promise rejection warning.
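For example, a sketch of the trap:

```js
(async () => {
  try {
    // Not awaited: if this rejects, the catch below will NOT see it.
    Promise.reject(new Error('lost rejection'));
  } catch (err) {
    console.error('never reached for the un-awaited promise');
  }
})();
// Node prints an UnhandledPromiseRejectionWarning instead.
```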
Thursday, 9 August 2018
async/await in node.js
Async/await is a syntax that makes your asynchronous code look more like synchronous code; it is basically some nice syntactic sugar built on top of promises.
You just add the async keyword to your function declaration or expression and that function will now return a promise! You can then use the await keyword to get the result of a promise!
One thing to note is await can only be used inside of an async function, so if you want to execute the code just in a file you need to wrap it in a self-executing function.
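A minimal sketch of that wrapper:

```js
// await is only valid inside an async function, so top-level code gets wrapped like this.
(async () => {
  const result = await Promise.resolve('hello');
  console.log(result); // "hello"
})();
```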
I know what you're thinking: "This is great, so I can stop writing those weird promises now, right?" Well, no. You can quickly create synchronous-looking code that returns promises in either a resolved or rejected state.
This does not allow for the handling of asynchronous code that is not promise based inside of your function.
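A sketch of the problem with just dropping a callback into an async function:

```js
const fs = require('fs');

async function readConfig() {
  fs.readFile('./config.json', 'utf8', (err, data) => {
    // data only ever arrives here, long after the async function has already
    // resolved with undefined
  });
}
```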
Unfortunately, this will not work, as the function will just return a promise that resolves with undefined. To get around this we need to wrap our callback code in a promise as before.
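A sketch of that wrapper:

```js
const fs = require('fs');

function readConfig() {
  return new Promise((resolve, reject) => {
    fs.readFile('./config.json', 'utf8', (err, data) => {
      if (err) return reject(err);
      resolve(data);   // the value now comes out of the promise
    });
  });
}
```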
This can now be used in an async function!
You can also make good use of util.promisify to help you with this!
Just remember it will only work for functions that match the Node pattern: the callback comes last, and the callback's first parameter is the error.
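A sketch reusing the post's doTest name (the body here is an assumption):

```js
const util = require('util');

// A callback written to the Node convention: callback last, error first.
function doTest(value, callback) {
  setTimeout(() => callback(null, value * 2), 10);
}

const doTestAsync = util.promisify(doTest);

(async () => {
  console.log(await doTestAsync(21)); // 42
})();
```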
You can see in this case that, because our callback method follows the Node standard, util.promisify happily wraps it in a promise.
In this case, I removed the parameter passed to doTest(), which results in a "TypeError: callback is not a function" being thrown. This will be very confusing if you didn't write the callback code you are wrapping and it doesn't have very good error messages.
More on async/await soon!
Wednesday, 8 August 2018
Promises in node.js: nesting promise patterns
Join me for a little exploration of some patterns that can be used with promises. We will mainly play around with different ways to pass data through promise chains, and how to get what you want out of the other end.
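As a sketch of the kind of chain in question (loadUser, loadUserMessages and loadUserSettings are hypothetical promise-returning calls):

```js
loadUser('some-user')
  .then((user) => {
    // Both inner calls need user.id, so they end up nested - and run one after the other.
    return loadUserMessages(user.id)
      .then((messages) => loadUserSettings(user.id)
        .then((settings) => console.log(user, messages, settings)));
  });
```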
Here we are loading the user information, then passing the id from that object to another two promises. The reason the two on the inside are nested is that they both need access to the user object. This also leads to another potential problem: the inner promises are executed sequentially, and in our case they don't need to be.
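One way to tidy that up, sketched with the same hypothetical calls:

```js
loadUser('some-user')
  .then((user) =>
    Promise.all([loadUserMessages(user.id), loadUserSettings(user.id)])
      .then(([messages, settings]) => {
        // Array destructuring hands us both results together, and the calls ran in parallel.
        console.log(user, messages, settings);
      })
  );
```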
This solves the problem in a pretty neat way: we wrap the two inner calls together in a Promise.all() to get them to execute in parallel. Notice how we take advantage of array destructuring to receive the results together; this is a very useful little pattern.
There may be a specific requirement for this to be changed into an object rather than an unwrapped array; in that case you could wrap both calls in a method with a .then() that maps the array into the shape you want.
It's up to you really where the different conversions and abstractions go; I'd advise being careful of having lots of little functions to wrap things up, as it can make the code hard to follow.
But what if we want to merge all of the data together into a single object? You can, of course, do this by nesting all the promises into one inner promise to share the data, but what if we are trying to avoid doing that?
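A sketch of the context-object approach described below (fetchMessages is a hypothetical promise-returning call; loadUserSettings would follow the same shape):

```js
// Each step takes the shared context, mutates it, and returns it for the next step.
function loadUserMessages(context) {
  return fetchMessages(context.user.id)
    .then((messages) => {
      context.messages = messages;   // updated before the returned promise resolves
      return context;
    });
}

loadUser('some-user')
  .then((user) => ({ user }))
  .then(loadUserMessages)
  .then(loadUserSettings)
  .then((context) => console.log(context));
```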
In this case, we change all of the functions to take and return a context object that is updated by each function. This can seem a desirable way to do things, but I would recommend against it. Firstly, every method is no longer really a pure loading method; it's more of a loadUserMessagesAndUpdateContext() method. If you use this pattern a lot I'm sure people wouldn't be too confused, unless they hit issues with the way it executes: if you look at my examples, they update the context object before they resolve. Most of the time this probably won't be a problem, but it could definitely give someone a headache.
Join me next time when we look at async/await!
Tuesday, 7 August 2018
Promises in node.js: Helper functions
Today we will have a look at some of the cool helper methods provided by the native promise framework. The first couple are just quick helpers to create promises in different states.
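A minimal sketch:

```js
Promise.resolve('already done')
  .then((value) => console.log(value));          // "already done"

Promise.reject(new Error('already failed'))
  .catch((err) => console.error(err.message));   // "already failed"
```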
These methods quickly wrap values in promises of either resolved or rejected state. This will then execute the path that that promise would normally go down (i.e. .then() for resolved, .catch() for rejected).
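Something along these lines (httpGet is the made-up helper the post refers to; it is assumed to return a promise of the response body):

```js
const urls = ['https://example.com/a', 'https://example.com/b'];

Promise.all(urls.map((url) => httpGet(url)))
  .then((bodies) => {
    // bodies is an array of results, in the same order as the urls array
    console.log(bodies.length); // 2
  });
```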
In this example we use a made-up httpGet function and pass in URLs to get an array of returned data. The simplest way to think about Promise.all is that it takes an array of promises and returns a single promise that resolves when all of the promises in the array are complete.
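A sketch of Promise.race used as a crude timeout (same hypothetical httpGet):

```js
function timeout(ms) {
  return new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error('timed out')), ms));
}

// Whichever settles first wins - the request or the timeout.
Promise.race([httpGet('https://example.com/slow'), timeout(5000)])
  .then((body) => console.log(body))
  .catch((err) => console.error(err.message));
```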
This helper is similar to Promise.all() except, as the name suggests, it just returns the first one that completes. This could potentially be a way of implementing timeouts. I find myself using Promise.all() much more frequently, but it is still good to understand!
Finally blocks are useful to help you close down resources no matter what happens.
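A sketch of that (openConnection is a hypothetical promise-returning helper; .finally() is available on promises in recent Node versions):

```js
let conn;
openConnection()
  .then((c) => { conn = c; return conn.query('SELECT 1'); })
  .then((rows) => console.log(rows))
  .catch((err) => console.error(err))
  .finally(() => { if (conn) conn.close(); }); // runs whether we resolved or rejected
```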
Just a quick note to show that promises can have .then() attached after they complete and the chain will still be executed. The reason for attaching this inside of another .then() was so that the event loop would get a chance to fire before attaching the new one.
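A minimal sketch:

```js
const done = Promise.resolve('finished');

done.then(() => {
  // By now the promise has long since settled, but a handler attached
  // later still fires with the same value.
  done.then((value) => console.log('late handler:', value));
});
```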
More promises coming soon!
Monday, 6 August 2018
Promises in node.js: Error Handling
Continuing on with the promise theme, let's have a look at error handling.
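Starting with a rejected promise that nothing handles — a minimal sketch:

```js
// No .catch() anywhere in sight.
new Promise((resolve, reject) => {
  reject('This is a rejected Promise');
});
```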
When executing this you will get a warning that starts with:
UnhandledPromiseRejectionWarning: This is a rejected Promise
The warning also tells you that the error was not handled by a .catch() block.
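Throwing inside the promise body behaves the same way — a sketch:

```js
new Promise((resolve, reject) => {
  throw new Error('An error happened inside the promise, but not rejected');
});
```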
In this case we get a similar message: UnhandledPromiseRejectionWarning: Error: An error happened inside the promise, but not rejected.
Both cases can be handled by adding a catch block to the code.
To add an error handler we just add a .catch() block to the end of the promise execution chain; if reject is called then the value passed to reject arrives in the catch handler function.
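For example:

```js
new Promise((resolve, reject) => {
  reject('This is a rejected Promise');
})
  .then((value) => console.log('never runs'))
  .catch((err) => console.error('caught:', err)); // "caught: This is a rejected Promise"
```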
If an error is thrown then that error arrives in the handler function instead. So we can see how .catch() blocks are very useful, but you need to be careful about nesting promises inside of each other.
In this case, we get an unhandled promise rejection message again, because the .catch() block will not pick up the rejected inner promise. To solve this we can do a couple of things; in most cases, you can just bubble the promise out of the inner .then() block by returning it.
The error handling will now work at a higher level because the promise chains are connected. But sometimes we may want to do some additional handling first.
It is still important to connect the chains by returning the inner promise chain, or the outer handler won't fire. When a promise is returned from inside a .then() block, the outer promise's state will depend on the inner one.
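A sketch of handling the inner error while keeping the chains connected (doOuterThing and doInnerThing are hypothetical promise-returning calls, with doInnerThing rejecting here):

```js
doOuterThing()
  .then(() => {
    // Returning the inner chain keeps everything connected.
    return doInnerThing()
      .catch((err) => {
        console.error('handled inner error:', err);  // inner catch fires
        // nothing is rethrown here...
      });
  })
  .then(() => console.log('outer then still fires'))
  .catch((err) => console.error('outer catch does NOT fire'));
```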
In this case, the inner error block will fire but not the outer, as no error was thrown. And the outer .then() block will still fire!
So if you do want to execute both catch blocks, make sure to throw an error in the inner one. How much of a good practice this is I'm not sure, but it helps to understand what state the promises are in.
Sunday, 5 August 2018
Promises in node.js
As mentioned in my callbacks post, some of the issues with callbacks can be solved with promises. Also, once you understand how the core promise methods work, you can use a few tricks to make your code look much nicer.
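A sketch of the starting point (writeFile and readFile here stand in for hypothetical promise-returning wrappers around the fs callback versions; we build one further down):

```js
writeFile('./greeting.txt', 'hello world')
  .then((path) => {
    return readFile(path)
      .then((contents) => {
        console.log('file contained:', contents); // the nesting is creeping in again
      });
  });
```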
Firstly we execute our promise returning method "writeFile", this returns a promise that we can call .then() on. The function that we pass into .then() will be executed once the asynchronous code has finished executing much like a callback.
In this example we can see that data can be passed from the promise to the .then() function and then passed into another promise, but doesn't it look like we are going down the same too-much-indenting route in the code?
One of the cool things about promises is that they pass whatever is returned out of a .then() function out as a promise. Let's take a look at a couple of examples to make this clear.
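First, returning a plain value:

```js
Promise.resolve()
  .then(() => 'TESTING')                 // a plain value gets wrapped in a promise
  .then((data) => console.log(data));    // "TESTING"
```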
We can see in this example that the text "TESTING" is returned out and wrapped in a promise, the second .then() function will receive this data as a parameter.
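Second, returning another promise (same hypothetical writeFile wrapper):

```js
writeFile('./first.txt', 'hello')
  .then(() => writeFile('./second.txt', 'world'))   // the promise itself is returned
  .then(() => console.log('both writes have finished'));
```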
In this example the second block already returns a promise, so that promise is returned from the first .then(), allowing the second .then() to trigger after the first completes. This is a simple yet powerful mechanism that helps us write more straight-line code around promises and mix asynchronous and synchronous .then() blocks in a way where the intent of the flow is clear. And it all works nicely as long as we remember to return the promises out of the .then() expressions.
One of the other things you will quickly need to do is create your own promises; you can wrap callback code in promises to make your life easier.
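A sketch of wrapping fs.writeFile:

```js
const fs = require('fs');

function writeFile(path, contents) {
  return new Promise((resolve, reject) => {
    fs.writeFile(path, contents, (err) => {
      if (err) return reject(err);   // reject on failure
      resolve(path);                 // resolve once the write completes
    });
  });
}
```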
In this example, we have a function that creates a new promise wrapping the fs.writeFile function. In its simplest form a promise is an object that takes an executor function with two parameters, resolve and reject. We execute asynchronous code inside the block and just make sure to call resolve when we have completed, or reject in the case of an error/failure. In this case we give fs.writeFile a callback that calls resolve when it completes.
We can also use util.promisify(). This function takes a method with a callback and returns the same method wrapped in a promise. It should definitely work for all of the core Node functionality and probably most other things, but be cautious with non-Node functions (from libraries), as they could implement a different signature and not work.
So yeah, that's pretty awesome! More on promises coming soon, and probably some async/await!
Friday, 3 August 2018
Callbacks in node.js
Dealing with asynchronous execution is a big part of writing code nowadays. There are many different ways languages allow you to do this, but I am a big fan of how node.js does it. Your code in node.js only executes in a single thread, which makes reasoning about it so much easier, and it uses some simple mechanisms to make a single thread more than enough to run the average web server.
To do this, any slow-running operation you start is run in the background, on a thread you have no control over.
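A sketch of the classic example:

```js
const fs = require('fs');

fs.writeFile('./out.txt', 'some data', (err) => {
  if (err) throw err;
  console.log('finished writing file');
});

console.log('I happen before it is finished');
```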
This would output "I happen before it is finished" before outputting "finished writing file". This is caused by the callback function being queued for execution on the main event loop as soon as the operation completes. One thing to be careful of is keeping your execution running, as this will block anything else from happening in your code.
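For example, a busy loop like this never yields back to the event loop:

```js
// Nothing queued behind this - callbacks, other requests - can run until it ends.
while (true) {
  // spinning...
}
```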
Because this loop keeps executing it stops the event loop from being able to run anything else and basically takes the web server down until you release control. This is why you should avoid using the sync methods that are provided by many APIs as they will block the event loop while they run.
So this is all well and good, and one of the things I like about this approach is it is reasonably simple to understand and get started using node.js. But as time goes on you will probably find some problems.
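For example, once a few asynchronous steps depend on each other, the nesting starts to pile up — a sketch:

```js
const fs = require('fs');

fs.readFile('./a.txt', 'utf8', (errA, a) => {
  fs.readFile('./b.txt', 'utf8', (errB, b) => {
    fs.writeFile('./combined.txt', a + b, (errC) => {
      // every extra step pushes the code further to the right
      console.log('done');
    });
  });
});
```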
You can already see the way this is going; most people call it callback hell, when the code just keeps indenting as you add more callbacks. You can manage this by wrapping some of the functionality into your own functions, but it can still be hard to manage, especially when most of your functionality is orchestrating other slow-moving parts.
This doesn't really solve the problem in my opinion, as it makes the flow of the code very hard to follow, jumping around the file and still dealing with lots of callbacks. There is one other problem with basic callbacks that you will most likely hit before long.
While this is better than some of the ways I have seen to achieve this, it's not particularly friendly to follow and it also executes once at a time, which is great if that's a requirement but not so good if it isn't.
I am by no means saying you shouldn't use callbacks; they are very simple and easy to get started with in a new node.js application, but you will probably want to start looking into promises fairly quickly (I'll cover these in a blog soon!). Personally, I usually start off with callbacks and migrate to promises when needed; slight inconsistencies don't worry me too much in this case, but as time goes on I do find the promise interfaces nicer for most things and am mostly using them.
Thursday, 2 August 2018
Dictionaries for constructing condition-less logic
While I don't recommend removing all conditional logic from your systems, lots of conditions can often be a sign, in my mind, that the design does not match the problem being solved. Normally I would expect freshly refactored code to be lower on conditional logic, and for it to increase heading up to the next round of design refactoring.
We can start to move the conditional code out into a dictionary; this makes it more compliant with the open/closed principle and less risky when adding new functionality to the system. It's great if you've identified that this area keeps changing.
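A sketch of the idea, with a hypothetical example of picking a shipping calculator by region:

```js
const shippingCalculators = {
  uk: (order) => 3.5,
  eu: (order) => 7.0,
  world: (order) => 15.0,
};

function shippingCost(region, order) {
  const calculate = shippingCalculators[region];   // lookup instead of if/else chains
  if (!calculate) throw new Error(`no shipping calculator for region: ${region}`);
  return calculate(order);
}
```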
We can take this further and create registration methods so that if we are in a shared piece of code the upper levels or consumers of the code can add additional functionality that will have access to things defined at a much higher level.
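A sketch of that registration idea, continuing the hypothetical example:

```js
const calculators = {};

// Consumers register their own handlers, with access to whatever they close over.
function registerCalculator(region, calculate) {
  calculators[region] = calculate;
}

function shippingCost(region, order) {
  return calculators[region](order);
}

// Higher-level code adds behaviour without the shared module changing.
registerCalculator('uk', (order) => order.weight * 0.5);
```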
In the long run, some of these patterns can be used with care to reduce the amount of confusing logic that is all in one place, so that the core flow of the application is easier to understand.
Wednesday, 1 August 2018
Blogging Update
So you may have noticed that my blogging has increased significantly of late; last month I made more blog posts than I had in the previous 4 years. It has been really rewarding and has led to a lot of interesting conversations with people about some of the topics posted.
Why the sudden increase? Well, I have always wanted to blog more but really struggled to get posts out. There were many drafts in different states of completeness sitting unpublished on my blog, so it was time for a different approach: first downsize posts to make them easier to write, then try to develop a habit of shipping a post every day. By reducing my size and quality goals I have managed to produce a lot more content, and I can work on increasing the quality over time. Is it really quality if no one sees it?
I've managed to quickly resurrect some of the half-written posts and break them into small series, just by changing my mindset to at least ship something every day. And while I managed most days last month (I started on the second), I did have one day where I was ill and didn't write a blog.
It's also strange because I watch quite a bit of Fun Fun Function, and he posts every Monday morning. I remember watching one of his videos that was quite short and thinking, man, way to phone it in. But now that I think about it, people who make this kind of content are just people who are successfully maintaining a habit of regularly producing it; they're not superheroes! Sometimes it's hard to keep up, like today, when I'm posting in the evening due to being very sleepy this morning!
Well, hopefully I can keep this going for a bit and maybe settle on a slightly different schedule over time, produce and reflect!
Tuesday, 31 July 2018
Tools and why we use them
Jumping on a new sexy tool/fad are we?
We constantly need to reevaluate the state of play; tooling, practices and languages are constantly changing around us. So assuming that what we thought was good yesterday is still good can be a massive downfall; in all honesty, there's a pretty good chance it was never good in the first place.
I can definitely see a trend where generally when we think something is good we overdo it, next thing you know it is a best practice and we do it everywhere, not just where it makes sense.
I remember switching to dependency injection, and suddenly we decided it should be in every class; everything should be injected. Instead of deciding on a case-by-case basis to use it where we needed it for testing, we just did it everywhere. That results in us not using our brains and just running on autopilot; perhaps we should always fall back to thinking about what things are actually good for.
I also saw an interesting talk lately on why we shouldn't always use microservices. The main point seemed to be that we should start off with a single application and change to microservices when they would benefit the project. Though I completely agree with this, I think it can be hard to change to microservices when you haven't done them before, so perhaps mandating the practice for a specific application from the start is ok if one of the main goals is to learn it.
So my point really is to identify the benefit of a tool or practice, and to be careful in this identification. I would say architectural, i.e. SOLID-style, reasoning is not good enough on its own to justify a tool. For example, if you say that dependency injection is for decoupling dependencies, can we please ask what the purpose of the decoupling is? I would say dependency injection is for allowing us to inject stubs (not mocks!! :/) at test time, so how about we just use it where we want to inject test doubles. Also, with some restructuring, just having the consumer pass in the dependencies is often much easier than using a dependency container.
Maybe next time we are trying something we should also try a couple of manual alternatives and evaluate the differences to help us avoid practices that we can do without.
Monday, 30 July 2018
closures
Closures are great! When I first started coding in JavaScript I wondered how you manage to hide data in your objects... The answer: closures!
Let's take a normal class (oh thanks ES6!)
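A sketch of the class version (the Person shape here is an assumption):

```js
class Person {
  constructor(name) {
    this.name = name;   // nothing stops outside code reading or changing this
  }

  greet() {
    return `Hello, ${this.name}`;
  }
}

const alice = new Person('Alice');
console.log(alice.greet()); // "Hello, Alice"
console.log(alice.name);    // "Alice" - the data is exposed
```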
Now how would we do the same with a closure?
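A sketch of the same thing built with a factory function and a closure:

```js
function createPerson(name) {
  // name lives in the closure: reachable by greet, invisible from outside.
  return {
    greet: () => `Hello, ${name}`,
  };
}

const alice = createPerson('Alice');
console.log(alice.greet()); // "Hello, Alice"
console.log(alice.name);    // undefined - no way to touch the captured name
```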
The factory function serves to store all the information inside of its closure when it is executed, so all passed-in parameters are available to any functions on the created object. It also makes them inaccessible to anything on the outside. Elegant and simple, no extra syntax required.
After using this pattern for a while in some applications I noticed that we were often only returning a method with a single function attached to it, in these cases is it even necessary to have the object?
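A sketch of dropping the object entirely:

```js
function createGreeter(name) {
  // No object needed when there is only one thing to do.
  return () => `Hello, ${name}`;
}

const greet = createGreeter('Alice');
console.log(greet()); // "Hello, Alice"
```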
In this example, we just return a function that can do the work. This is simpler and it also helps to make it so that the abstraction only performs a single task.
Sunday, 29 July 2018
Test First Pipeline
So we've gone from testing after the code has been merged to master, to testing before it is committed, and it's great. But I think we can go further. What I suggest is that we test before we develop: much like TDD, testers and the PO would design UI/acceptance tests before the code is written. Then the tests are run before the code is merged; much like TDD, this helps to guarantee the code is testable before it is created.
This would also require developers, testers and business analysts to work together more closely, hopefully resulting in a good working environment focused on quality. I think this mentality would also be good for teams: always thinking about how something is going to be tested and the impact on quality, as many times in development the issues arise because one role is not sufficiently supporting the others.
The whole team can also decide on the scope that requires testing if there is to be any manual testing, what parts are likely to break etc.
Saturday, 28 July 2018
Friday, 27 July 2018
Solo Programming
Solo coding is a new practice where a developer sits down and puts their headphones in and works completely on their own. This allows for higher concentration and individual focus levels just not achievable in group working! It means the team can output more work! Every developer can maximise their output!
Some developers dislike pair programming, and that's not a bad mark against them. There are a lot of benefits to mobbing or pair programming, but I have days when it's just nice to put my headphones in and get on with stuff; maybe this is bad? I mean, now I've gone and added less-reviewed code that only I understand.
The first step is to make sure that the developers understand the benefits of shared work. It may seem slower at first than solo work, but after you get some practice you will find that the work starts to speed up and you become way more efficient working together. It is also about the best way to share knowledge and reduce risk as a team. When a single person writes or works in an area, they are the only person that knows that area, meaning that others will misunderstand it and potentially cause issues when working there. Knowledge gaps can lead to failures, and to big problems when people have time off.
I would suggest that to start we begin by timeboxing the pairing activity; it's a completely new way of working and it will take some time to get used to. Also, you must remember to strictly follow the rules: pair programming or mobbing is not just sitting around the computer working together, there is a strict format to follow. One person should be on the keyboard and the other(s) should be navigating. The person on the keyboard should follow instructions from the navigator(s), not just get on with the work and explain; the navigator(s) should be telling the driver what to do, they are in control at this point. Follow this with extreme discipline! Also, the positions should swap regularly; I recommend 5-10 minutes, then swap.
So give it a go, trying to follow the rules properly. After a while you can adjust the numbers if you want, and start extending the time spent pairing. I still have mixed feelings about how much I enjoy group work, but I can clearly see it is a benefit, so trying to find a good balance of doing it regularly feels important. Also, I read that some companies are trying to vet out people who don't like pairing in the interview process nowadays...
Thursday, 26 July 2018
Test Software Like AI
It seems to me that in the future the role of an application developer in many markets will revolve much less around describing the functionality of an application in code and much more around training AI systems. This splits the developer's role into two main areas: creating the training data, and testing that the AI is performing tasks correctly. As the solutions the AI comes up with are often hard to understand and reason about, we may not even bother; if we can test it well enough, we just test that it achieves all of its goals.
So why don't we start testing applications in this way now? It's just an extension of where we are naturally going with Test Driven Development: write tests from a black-box point of view that confirm the application, or significant subsystems, function as desired. If we test well enough then the tests become much more valuable than the application itself in many ways; the application can be completely rewritten and we can guarantee that it works in as many cases as possible.
It seems very likely that as AI allows us to develop systems much faster, and with much less understanding of how the system actually functions, the role of dev-testers will become the main job in a development team. These engineers will still code, building tools to help them test the applications, as I believe it is very hard to make a generic test framework for all applications that functions as well as purpose-built tooling that takes the domain into account.
The machines are coming, let's make sure they do what we want :)
Wednesday, 25 July 2018
Asking Questions
The Goal is a really great book: not only do you get the lean approach to running a business, but the approach the mentor character takes to teaching is one that seems really effective. He always answers questions with more questions, forcing the main character to really think about what the answers to his own problems are.
Upon further research this approach appears to be based upon the Socratic method, from Socrates, who was born around 470 BC. So why have we adopted it so little? I guess it is because it is very hard to retrain yourself to ask questions instead of giving answers; maybe there is something satisfying in appearing smart because you know the answer. But there is surely more value in asking the correct questions: the learner will start to develop better analytical patterns to follow when they have questions, and this should free up time for the teacher as much less explanation is required.
But what questions should you ask as the teacher? One question suggested by a colleague of mine, for when a junior team member approaches them, is to ask what they have tried so far; this seems to be a good starting point. From here, do we ask a question to highlight the area they should be looking at, or do we ask a question to get them to think about the different areas the problem could be in? I would suggest the latter, as surely the goal of teaching is to make the student independent of the teacher.
Time to read some Plato, I think!
Tuesday, 24 July 2018
Review Apps
Testing is very important; it helps us to increase stability. But in an iteration we often build up a load of work in our master/dev branch and then have a stabilisation period. Even when the testers are keeping up well we are always slightly unstable: anything just added is not tested, and often not looked at by the PO/BA who requested it until a demo or a release. If the test team is struggling to keep up, then what we end up with is a wave of instability, often leading to iterations more focused on bug fixing.
Surely one of the points of agile is to be able to release at any point; having a couple of development iterations and then a stabilisation one is kind of like having one really big iteration. If I remember correctly, the idea is that the product is releasable every iteration at a minimum.
So how do we solve this? Review apps! We started this practice when we moved to Docker using GitLab; Docker allowed us to commission environments much faster, so we could deploy the application much more easily. Every pull request that gets created can be tested by a tester and reviewed by the person who asked for the change prior to it being merged, which significantly helps to increase the stability of the app.
It can be achieved without Docker, in my thinking, by just having an environment for each person of interest. For example, a tester can have their own environment and deploy the builds that happen automatically on every branch to it for testing; then they add a tag saying that they approve the change.
There can be some issues due to things like needing a lot of data in a system to test, or perhaps database migrations. These can be stumbling blocks to getting this working, but there are ways around them through good seed data, or by building tools to quickly push test data into the system. It may seem like this could take a lot of time, but in my opinion it is worth the effort.
Going forward we are going to look at adding automated smoke testing to the review apps as well, if every pull request is tested, smoke tested and reviewed by the person asking for it hopefully this should lead to us having an extremely stable and releasable master/dev branch as well as helping to guarantee we are building what was originally asked for.
Monday, 23 July 2018
No More Iterations
Once you start doing continuous delivery, what is the value in doing iterations? Surely one of the reasons we go for smaller iterations is to allow work to be reprioritised every week. So once we deliver whenever a story is complete, can we not just allow work to be changed straight away?
I see it as: the PO is in charge of the backlog and they can change work whenever they want, just not the work someone is currently working on. You can still track work on a fixed timescale if you want velocity, and do demos and retrospectives on a fixed timescale. We can merge planning and backlog grooming/refinement into a single session where we estimate and review work; this can be at a fixed time, but can also just be arranged ad hoc by the PO when a session is required.
Hopefully this should give us greater flexibility to respond to change and go from plan to reality. There might be some difficulty getting your tooling to work like this, but you can still set a time period in it I guess... It would kind of be nice to have a tool with an in-progress area where, a certain amount of time after a story is completed, it goes into the completed log.
Sunday, 22 July 2018
Pull Request Guide
Review for logic errors
Have they considered nulls in likely cases? Does the way it uses and interacts with other parts of the system make sense? If they have used patterns do they make sense, or are there any they should have used?
Review for architectural issues
Is the separation of code done well? Think about SOLID, coupling and cohesion. Often at this point it's worth just asking them questions about their thoughts on it and what their approach was. Just make sure they have thought about it and their thinking is sound.
Review for over engineering
Have they added things that aren't necessary? We are all guilty of this, and they may not need to remove them as they have already been done; you might keep them if they don't cause too many problems. Just let them know for the future, PRs are as much for training as they are code review.
Review for readability
Readability is how easy the code is to read, not whether it conforms to your internal version of what is well styled. Does the code have confusing knots, i.e. code that is too compressed and hard to comprehend? Not whether the line breaks are done in a consistent manner. This one is hard to call; my main suggestion would be to care less than you think is important about style, as it adds very little, and there is a difference between style and readability.
Saturday, 21 July 2018
Story Dependency Chains
I'm sure I read somewhere that user stories are supposed to be independent of each other. We eventually took this and changed it to: they should be independent of each other within an iteration, so that they do not block other work in that iteration.
But surely embracing the dependency of stories gives us a much better estimate of how long it will take before a story is done. We essentially have to look at two metrics: the capacity of the team, i.e. velocity, which taking into account holiday etc tells us how much work we can do in the time frame; and how long the longest dependent chain will take to be done. If there are only 2 months of work in total but 3 months of dependent chain time, then it will still take 3 months.
Another useful measurement could be expansion: how much do our iterations normally expand due to bugs and other important last minute work? It's going to happen, there are always things that pop into the iteration, so rather than trying to pretend it won't happen, let's create a metric from it and try to reduce it as much as possible. We can easily estimate bugs or any added stories after they have been done.
So the time something will be shipped in is essentially the effort that it and its dependent stories will take, plus an allowance for our average expansion. This also relies on the stories in question being constantly active, so either planning them into iterations correctly or not doing iterations at all.
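As a rough sketch of that calculation (the function and the numbers here are made up for illustration), the ship time is driven by whichever is longer, the capacity-based time or the dependent chain, plus the expansion allowance:

// totalEffortMonths: effort across all the stories involved
// teamCapacityPerMonth: how many months of effort the team gets through per calendar month
// chainMonths: how long the longest dependent chain takes if worked continuously
// expansionRate: average expansion from bugs/last minute work, e.g. 0.5 = 50%
function estimateShipTime (totalEffortMonths, teamCapacityPerMonth, chainMonths, expansionRate) {
  const capacityTime = totalEffortMonths / teamCapacityPerMonth
  const baseTime = Math.max(capacityTime, chainMonths)
  return baseTime * (1 + expansionRate)
}

// 2 months of capacity-bound work, but a 3 month dependent chain, with 50% average expansion.
console.log(estimateShipTime(4, 2, 3, 0.5)) // 4.5 months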
These measurements can hopefully be adjusted though, hopefully by pairing/mobbing dependent chain work we can decrease the time it takes to get the work done.
Yeah didn't really gif this one...
Friday, 20 July 2018
Microservices
By making our microservices stateless we make them more testable and scalable: a stateless service must take some input data and return an output, since it doesn't store any state. This is easier to test as we can essentially write the data that we pass in and check what it passes back for each test case.
Where does my state go?
Most applications will likely still need quite a large amount of data stored in a storage mechanism (i.e. a document database or SQL), but if we can move these interactions to the start and end of our service chains then it should enable us to keep as much functionality as possible in easily testable services (see the sketch after this list):
- Load data from store (this service is simple as all it does is load)
- send to service to do work (this service is easily tested)
- Save any data required (again should be simple)
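A minimal sketch of the middle "do the work" step; the function, names and numbers are invented for illustration:

// Stateless "do work" service: everything it needs comes in as input,
// everything it produces goes out as the return value, nothing is stored.
function calculateInvoice (customer, items) {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.quantity, 0)
  const discount = customer.isVip ? 0.1 : 0
  return { customerId: customer.id, total: subtotal * (1 - discount) }
}

// Testing is just input -> output, no database or setup required.
const invoice = calculateInvoice({ id: 7, isVip: true }, [{ price: 10, quantity: 3 }])
console.log(invoice.total) // 27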
Micro services enable us to work in new ways
- When the application is split up into smaller pieces, different services can take advantage of different programming languages/frameworks and storage mechanisms.
- For larger projects, teams can work on separate services and release them independently of each other.
- Because we are using multiple pieces to make up our application, we can run them on different hardware and run more instances of individual services to scale out.
Wednesday, 18 July 2018
Value
Sometimes it can be difficult to determine the value that we are adding to a project; it is very easy to get our heads trapped at the abstraction level that we work at. For example, as a developer I have previously invested lots of time in formatting code bases to new standards. At the time I believed this was adding lots of value to the project, because I was thinking about the project like a developer thinks about the project. I believe this is a stage many developers go through in their quest to be great at their job.
If we look at a project and try to determine value, we quickly get to sales as value. That is the only real value that can be delivered, if you are holding up delivery of a product you'd better be damn sure you have a good reason as you are costing the company money. This can be very hard to see with the layers between you writing code and the person collecting the cheque from the customer so disconnected. Unless we put effort into thinking about what is adding value it is very possible that we are all delivering lots of things that are of low or no value.
It can also be hard, when trying to shift to this mindset, to find where long term value delivery fits in. It's already hard to measure the value of adding a feature to a piece of software; how do we even begin to comprehend the value in spending a day writing documentation for other developers, or helping to train colleagues?
There is loads of great advice out there on development techniques, mobbing, pairing, TDD, XP and so on. But how can we measure the difference that they make to our organisation? What metrics should we use? The only thing I can think is to use story points as a starting point, velocity could be a good metric, but story points are hard and not directly tied to value... perhaps we should measure the value of a story/bug at the same time that we measure the effort required?
Tuesday, 17 July 2018
Bottlenecks
Lately I've been listening to "The Goal", by Eliyahu M. Goldratt. Which is a great novel about managing a business. In summary bottlenecks control how much output a business has, and output generates money.
I've been playing with a couple of theories about how this could relate to software development. Firstly, the bottlenecks could be roles within the team. For example, if you have a lot of work waiting to be tested then testing could be a bottleneck. I've heard about using WIP limits to manage this; for example you might say only 2 pieces of work can be in the test column of your kanban board at a time. If this happens then people need to help the tester to move work out of the column. This may seem counterproductive, as the developers would be faster just developing, but if there is a limit on what the team can produce then it doesn't matter how many work-in-progress stories there are, they aren't done.
Often work in progress, or inventory in factories is seen as having value, but in the book and many other things that I have read they describe this work as being bad rather than good. You can begin to see how this applies in development. Work in progress must have latest merged into it, it can include increased technical debt and it can get in the way of the team moving through other work that is more important.
One of the other ideas I had is that maybe the bottlenecks are individual stories themselves, often I come across certain stories that take longer with individual team members than they should, maybe these stories are problematic for the team member, due to lack of domain/technical knowledge or maybe they have just hit a problem that is causing them to procrastinate somewhat. How about we track the average time for a story (or story with each member), and if the story goes over the average, identify it as a candidate for pairing/mobbing. This could help us to patch weak spots in the development process.
Monday, 16 July 2018
Mobbing
Mobbing is when a whole team sit around one computer similar to pair programming and all code together! This practice like pair programming has many advantages and even though it seems like it would slow you down having everyone at the same computer it can actually speed you up as there is a lot more knowledge sharing and focus on the task at hand.
Mobbing is not five people watching and one person coding, like pairing the people who aren't at the keyboard (driving) should be navigating (making the decisions and reviewing the work being done).
Mobbing can be useful for complex tasks and knowledge sharing, making sure that all the people who are required to solve a problem are sitting around the computer. This should mean there is no wait time when people have questions.
Mobbing can also be used for learning, grouping together to pick up a new technology or practice. Think about it: the whole team will have a shared understanding rather than one person being a single source of knowledge and failure.
Best of all, mobbing makes sure that the work being done is the best the team can do, with every member's skills being used and any problems one member has being spotted by the others.
Sunday, 15 July 2018
Inspiring Creativity
Hey Moron!
People are awesome! Even you!
Why do you think that Google offers 20% of work time to work on whatever you want? So they can steal all the awesome ideas? Well, maybe, but that's beside the point! It also really helps to inspire people's creativity. While it's all good to hire and train your people to be the most super awesome workers ever, if they aren't inspired to do work through creativity, pride and self-motivation you're probably wasting a lot of output. Imagine if you could make all your workers increase output by 5%! In a 100 person company that's like hiring 5 more people! You do the math!
While I don't think there is a recipe for making this happen there is a guaranteed way to stop it happening! So let's start from there! Here's what not to do!
Be over controlling! (Micromanaging)
Have no trust in people to do the right thing!
Discourage open discussion!
So next time your colleague suggests an idea, encourage them. Yeah I know they're a moron and it's a terrible idea! But play it through with them and they will either realise it or it might become more than you could imagine it would be =]
Read a Freaking Book
Peopleware - Tom DeMarco
https://www.amazon.co.uk/Peopleware-Productive-Projects-Teams-3rd/dp/0321934113/
Work Rules - Laszlo Bock
https://www.amazon.co.uk/Work-Rules-Insights-Inside-Transform-x/dp/1444792385/
Saturday, 14 July 2018
Mocking
Some projects seem to encourage the use of mocks. Why not? They're really powerful, right!! "I can do all kinds of stuff with them!" Personally I find mocks to be at the extreme end of the scale: while they are a really powerful and useful tool, I rarely need them. And why add complexity when it's not required? Even the best mocking setups are pretty complex.
Most of the time you can get around mocks by restructuring your code so that the dependencies are executed outside of the place where the testable logic occurs. The easiest illustration of this is database calls: load the data and save the data externally to any manipulation or calculation logic you are doing on it. This means the construct that accepts the data and returns some other data can be tested by passing in values and checking what is returned.
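As a rough illustration (the function and data here are made up), the calculation becomes a pure function that never touches the database; loading and saving happen in the calling code:

// Pure calculation logic: data in, data out, no database access to mock.
function applyDiscount (order, discountPercent) {
  const total = order.items.reduce((sum, item) => sum + item.price, 0)
  return { ...order, total: total * (1 - discountPercent / 100) }
}

// The test needs no mocks, just an input and an expected output.
const order = { items: [{ price: 50 }, { price: 50 }] }
console.log(applyDiscount(order, 10).total) // 90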
Perhaps the logic needs to call another service to let it know when something happens; realistically I can't say that not using mocks will work in every case. But you should definitely ask yourself if you can move the calling code out into another place, away from the testable logic. Perhaps the service could return the message to send to the other service. Do we really get a lot of value out of testing that we passed what we guess is the right thing to the network abstraction? Would this not be better covered by an integration test where we test a call to the real service?
Sometimes things get to a point where we need to use mocks, but I think people often don't spend enough time trying to avoid them and make simple input/output tests.
Another example: I have a method that adds data to a database, and a method that gets data from a database. I mean, I could mock the DB and interaction-test that the right looking things were passed, but then I'm really testing that it's implemented how I implemented it. To me, checking that the code I wrote works in the way I wrote it adds nowhere near as much value as testing that it does what it's supposed to do, no matter how it does it. In this example we could easily call the add method, then the get method, and check that the correct information is returned. We can set the database to in-memory mode, or use a memory based abstraction around the database. If your testing is really stringent then you should probably at least have an integration test on top of this that checks the real database functionality works as expected as well.
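A minimal sketch of that add-then-get style of test, using an invented in-memory stand-in for the database abstraction:

// Trivial in-memory implementation of the same add/get shape the real store has.
function createMemoryStore () {
  const rows = []
  return {
    add: (item) => { rows.push(item) },
    get: (id) => rows.find((row) => row.id === id)
  }
}

// Test the behaviour (what comes back), not the implementation (what was called).
const store = createMemoryStore()
store.add({ id: 1, name: 'widget' })
console.assert(store.get(1).name === 'widget')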
Friday, 13 July 2018
Reasoning with SOLID Principles: Dependency Inversion
Dependency Inversion
The dependency inversion principle is the final tool from our SOLID toolbox: basically, depend on interfaces rather than on other classes. This way the implementation can be switched out, as our objects are less coupled together. This little pattern can be very useful when you just plain want to switch the way a dependency is referenced for more menial reasons; perhaps you want to reference a class that is in a higher package than you, so why not just have the low package define an interface and the high one implement it?
Dependency inversion is also pretty related to dependency injection. They are not the same thing, nor done for exactly the same reason, but dependency injection makes use of the dependency inversion principle to allow you to specify a set of tools you require, allowing the dependency system to select at run time the things that fill your needs. This is very useful for testing.
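A rough JavaScript-flavoured sketch (the names are invented): the order logic depends on whatever is handed to it that looks like a repository, not on a concrete database class, so a test can inject an in-memory stand-in:

// High-level logic depends on the shape it needs, not on a concrete class.
function createOrderService (orderRepository) {
  return {
    placeOrder: (order) => {
      orderRepository.save(order)
      return order.id
    }
  }
}

// In production inject a real database-backed repository; in tests inject a fake.
const saved = []
const service = createOrderService({ save: (order) => saved.push(order) })
service.placeOrder({ id: 42 })
console.assert(saved.length === 1)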
My main warning about this principle is very similar to the open closed principle: try to avoid overuse. You don't need to start with this pattern; if a dependency needs to be inverted it will become clear over time. Refactor to patterns rather than starting with them, and you often end up with simpler code.
Thursday, 12 July 2018
Reasoning with SOLID Principles: Interface Segregation
Interface Segregation
The interface segregation principle is pretty sound: make interfaces small so that a client or user of the interface only needs to implement or use the methods it cares about. This is pretty good thinking; generally small, cohesive things are better.
But in the long run does this not just lead to it being easier to use duck typing? I guess maybe that is then an extreme implementation of the principle altogether. Yeah, we're going to lose the niceness of knowing that if A exists on the interface so does B, but hey, there's a compromise to everything. And in that case you don't need to split interfaces up just because one client only wants to implement a part.
Yeah I went a bit off track with this one, I'm on a break at a conference and I really don't disagree much with the principle :)
Wednesday, 11 July 2018
Reasoning with SOLID Principles: Liskov Substitution
Liskov Substitution
Liskov is perhaps the most law-like SOLID principle: every sub class should be usable as the base. I can't find an example where I don't think this makes sense; probably more importantly, now I wonder if I should be using inheritance at all?
The complex taxonomies of classes that made this relevant now seem a distant part of the past, and while inheritance still has much usefulness, I feel there was definitely a time when it was overused. Better to compose objects of each other rather than to make them each other; looser coupling is implied in that relationship, i.e. a car has 4 wheels rather than is a four wheeled vehicle object.
This may seem a trivial difference but it really comes into its own when single inheritance is enforced. Composition is very much take and choose what you want, whereas inheritance implies a much deeper relationship: one cannot be a four wheeled vehicle and a two doored vehicle when it comes to base classes, whereas an object can have four wheels and two doors.
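A small sketch of that difference in code (the objects are invented): with composition the car can have four wheels and two doors at the same time, something inheriting from two base classes would not allow under single inheritance:

// Composition: the car *has* wheels and doors rather than *being* a
// FourWheeledVehicle or a TwoDooredVehicle base class.
function createCar () {
  return {
    wheels: [{}, {}, {}, {}],
    doors: [{}, {}]
  }
}

const car = createCar()
console.assert(car.wheels.length === 4 && car.doors.length === 2)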
So Liskov good! But be careful with those large complex inheritance designs, as inheritance is an extremely tight form of coupling!
Tuesday, 10 July 2018
Reasoning with SOLID Principles: Open Closed
Open Closed Principle
This principle is really great: basically, set up your code so that new features are extensions to the existing code and not changes. Although I find it to be really useful, I would recommend exercising caution when using it. It is very easy to overuse this principle and end up with a lot of unnecessary code ready to handle perceived future changes.
Identify the areas that are susceptible to change and then refactor them towards the open closed principle. As a rule of thumb I would say the first time just write the code in the simplest way possible, then when a new feature is required add an if statement to incorporate the change. By the third or fourth time you should really be thinking about changing the code to accommodate future changes.
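As a rough sketch of that refactoring (the example is invented): once the if statements start to pile up, the varying behaviour can be pulled into a lookup of handlers, so new cases are added as extensions rather than edits to existing code:

// Each shipping option is an extension: adding a new one means adding an entry,
// not changing the existing calculation code.
const shippingCalculators = {
  standard: (order) => 5,
  express: (order) => 15,
  pickup: (order) => 0
}

function shippingCost (order) {
  const calculate = shippingCalculators[order.shippingType]
  if (!calculate) throw new Error('Unknown shipping type: ' + order.shippingType)
  return calculate(order)
}

console.assert(shippingCost({ shippingType: 'express' }) === 15)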
There are also times when it can be applied straight away; it seems to me a lot of good architecture is identifying which parts of the system are likely to change and planning this into your design. This does not mean you need to spend loads of time identifying which parts of the system these will be beforehand and designing them in. Refactor your design as they naturally appear in the course of development.
So this principle is yet another great thinking tool that should be applied with balance but when in doubt Keep It Simple Stupid.
Monday, 9 July 2018
Reasoning with SOLID Principles: Single Responsibility
The SOLID Principles are a great tool to help you learn object oriented principles, but after trying to apply them for quite some time I think there are definite boundaries to when and where they should be applied.
I'll break this into parts! here is part 1!
Single Responsibility Principle
Single responsibility is a great tool for quickly noticing when you've got too much stuff in your stuff: if it's obvious that there are two very different responsibilities in an object it can be worth separating them to make things clearer. The issue you hit with this is that you have to be careful about what abstraction level you are reasoning at when considering it.
Say I have an Order object. My order object contains things that are order related, perhaps an ID for the order and some methods to update the order and send the order. But if someone ended up adding a method that draws an alert to the UI, this perhaps would stand out.
get_order_id()
update_order()
get_order_items()
draw_ui_alert()
I wouldn't normally name like this, just trying to make it clear that all the methods apart from the draw_ui_alert clearly relate to the order.
Perhaps I add a method to print the order. To start with this method is small and just outputs the order id, which makes sense within the SRP, right? The responsibility of the object is to manage the order, the responsibility of the print function is to print the order. We can see that if we were to make the print function also add a new item to the order, that would be a violation of the principle. But what about when the print method grows very large because it also includes a lot of code that describes how printing works, not necessarily related to the printing of the order?
Are there two responsibilities there? It sounds like it, but when that code was smaller it didn't feel like there were, so surely the principle has to be used in conjunction with balancing the size of things. So now we look at our print method: the first line initialises a printer object... well, that's a single responsibility depending on the abstraction level we are thinking about, right? Being responsible for adding a and b together could be a single responsibility. I'm not hating on the principle, just noting that it sounds like a simple rule but in practice is much more a great tool for reasoning about whether something does too much, or how to split it when it is too large.
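A small sketch of that kind of split (the names are invented): the order keeps its order-related behaviour, and how printing works lives somewhere else:

// The order object only knows about order things.
function createOrder (id) {
  const items = []
  return {
    getOrderId: () => id,
    addItem: (item) => items.push(item),
    getOrderItems: () => items
  }
}

// How printing works is a separate responsibility, kept out of the order.
function printOrder (order, printer) {
  printer.write('Order: ' + order.getOrderId())
  order.getOrderItems().forEach((item) => printer.write(' - ' + item.name))
}

const order = createOrder(1)
order.addItem({ name: 'widget' })
printOrder(order, { write: (line) => console.log(line) })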
I guess for quite a few of the principles that is the key, knowing when to apply them and how. But also just using them as thought tools :D
Sunday, 8 July 2018
Git Bisect Scripting!
So sometimes you want to find out in which commit a bug was added. Even if you have tried git bisect, out of the box it's a pretty manual process. Let's just go over the basics for anyone who hasn't used it before.
Bisect basically allows you to do a binary search over a git history to find when a bug was introduced. To start you will need a known bad commit (normally the latest, or a released version) and a known good commit, probably the commit the feature was added in when it was working all well and good :)
First enter bisect mode
git bisect start [BAD] [GOOD]
you can use all the usual allowed stuff (branch, commit#, HEAD~2, whatever)
Then manually check if the commit is good or bad and report!
git bisect good
git bisect bad
Also if the project does not build or something you can skip one!
git bisect skip
Then when it tells you what you are looking for you can exit with:
git bisect reset
BORING!
Right, let's automate it! You can run any command that you can run on the shell and have it return the state to bisect.
Exit codes
We just exit the script with the following numbers to tell bisect what state the commit is in:
- GOOD - 0
- BAD - 1
- SKIP - 125
I'll make a little noddy function to test
module.exports = function add (a, b) {
return a + b
}
I then add a few commits and break it on the way,
for this example I put my testing script outside the main git repo so that the checkouts won't have a problem.
// Script run by `git bisect run`; the exit code tells bisect whether the
// currently checked out commit is good, bad, or should be skipped.
try {
  const add = require('../bisect_script/')
  const result = add(2, 2)

  if (result == 4) {
    process.exit(0) // good commit
  } else {
    process.exit(1) // bad commit
  }
} catch (err) {
  process.exit(125) // can't be tested (e.g. doesn't build), skip it
}
Then we can run the script against bisect with
git bisect start HEAD HEAD~5
git bisect run node ../bisect_test_Script/test.js
OUTPUT
cce8f28154071789a33a8b101cd11dc6bae2cf33 is the first bad commit
commit cce8f28154071789a33a8b101cd11dc6bae2cf33
Author: Robert Gill <envman@gmail.com>
Date: Sun Jul 8 16:48:30 2018 +0100
more logging: broke it
:100644 100644 e1dbfd4a103f424f065510204f9ae5ff80db1625 5c87270f0517cf407c4d486796899be2d87bc124 M index.js
bisect run success
Just make sure you do git bisect reset after to get back to the starting point.
Code Example (2 repos subject and test script)
https://github.com/envman/bisect_me
https://github.com/envman/bisect_test_script
You may need to update paths in bisect run and the test script to make it work depending on how/where you clone.
To Git! From TFS!
So I found this post from a few years ago that I never published; I've rolled it in glitter, and maybe it's useful... also I'm probably going to write some more posts on git soon, so why not give some history...
This is my attempt to help out people who are thinking about migrating to git from TFS. I believe git has many advantages over TFS, but I have seen many people struggle and complain when using it for the first couple of weeks. To put this in context, I have used git for 6+ years but was helping the rest of the company I am working for move over to git.
Git for TFS Users
If you just use your source control for checking in and getting latest, git is probably going to add some confusion to your workflow. Visual Studio tries to hide the extra steps that are going on under the covers, which is fine when things are going well but will probably lead to you making mistakes because you don't fully understand what is happening when you are performing source control operations.
You're probably going to hate git
Git is not perfect; it's complicated and has a horrible learning curve. Here's a site that might help. You might think that you can just switch to git and off you go (how difficult can it be, eh?). Try it... go on...
Basic Cycle Differences
TFS
- Get Latest
- Check in
Git
- Fetch
- Merge
- commit
- push
DVCS
Git is a distributed version control system, so it does not necessarily have to have a central repository, but it can handle that setup and is probably most often used in this way. It also allows for connecting to multiple "remotes", which means you could push directly between users or set up more complicated systems.
Source Control Usage
When you first start using source control the purpose is quite simple: let me share my code with the people I am working with and track the changes in a way that I can understand what happened when two people are making changes in the same place. So you need a few basic operations:
- Push my code in
- Get other peoples code
- Merge when bad times happen =[
Nowadays I find myself wanting quite a lot more than I used to:
- Quick and easy branching
- Ability to merge locally?
- Private areas for subsets of team to work on same code
- Have my source control help me to find where defects were introduced
- Ways to track previous versions so that they can be patched and managed
Source Control as a Tool
There is so much more that you can do with source control than just check in and checkout files.
- Marking previous versions so that you can bugfix
- Working with subsets of your team on features without affecting the whole team
- Managing check ins via code reviews
- Search through history to find out where errors came from
Tips for changing to git
- Make an effort to learn the differences and what is going on under the covers
- Have someone on standby to fix things when they go wrong
- Practice with a test repository before moving over
Checkout
When you checkout in git, the contents of the working directory are changed to whatever commit you are checking out. Maybe you are checking out the v0.1 branch; once this command is run the contents of the repository will be whatever commit the v0.1 branch is pointing to.
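For example (assuming a branch or tag called v0.1 exists):
git checkout v0.1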
Branching
Branching is where git really comes into its own; it's the flexibility and ease of its branching that allows for all the cool workflows and ... that really make git so powerful.
Branching in git is different from TFS. In TFS you branch a folder and essentially have two versions of that folder with similar contents. In git you branch the working directory, so you can only see one branch at a time (unless you download the repo twice).
Tools
Visual Studio - now has pretty decent git support
Source Tree
Git Kraken
Command Line - my personal preference, so much tooling now has good cli interface and requires me to hit the terminal, docker, k8, git, node
References / Further Reading
http://pcottle.github.io/learnGitBranching/
https://try.github.io/levels/1/challenges/2
http://git-scm.com/book/en/v2
http://roadtoalm.com/2013/07/19/a-starters-guide-to-git-for-tfs-gitwits/
https://channel9.msdn.com/Events/TechEd/NorthAmerica/2013/DEV-B330#fbid=
http://stevebennett.me/2012/02/24/10-things-i-hate-about-git/
http://think-like-a-git.net/
Friday, 6 July 2018
Git Abuse: Making a git repository without git
notes
I'm doing this in the OSX terminal, so it will probably work on OSX/Linux, less likely on command prompt. Use CMDER to be happy...
First we create an empty folder to house our working directory
mkdir gitdemo
cd gitdemo
Then create the .git folder that stores our copy of the repository
mkdir .git
cd .git
Now we start creating some git files, the HEAD file contains a string that says what the HEAD currently points to, HEAD is a pointer to the currently checked out location.
echo "ref: refs/heads/master" > HEAD
Then we create the objects folder
mkdir objects
Then we create the refs folder with the heads folder inside of it
mkdir refs
cd refs
mkdir heads
You can now use this as a working git repository!
Make sure you change back to the gitdemo directory
cd ../..
echo "console.log('hello')" > index.js
git add -A
git commit -m "initial commit"
If you've done everything correctly this should work!
Now if you look inside of the .git folder you can see that git has started adding more things to the objects folder, as well it has created ./.git/refs/heads/master
cat ./.git/refs/heads/master
This should output a commit hash, so we see that this basically says master is on this commit.
I wonder if we can create a branch by just copying this file...
cp ./.git/refs/heads/master ./.git/refs/heads/mybranch
git branch
Now displays
* master
mybranch
But then could we check this out by changing the HEAD file, I mean it won't update the working directory but as they are both pointing at the same commit this should be fine?
echo "ref: refs/heads/mybranch" > HEAD
git branch
Now displays
master
* mybranch
Hopefully this starts to give you an understanding of how gits refs/branches and HEAD work on disk.
Thursday, 5 July 2018
Working with Git Submodules
Git submodules enable you to have repositories inside of each other, this can be a useful mechanism to share code between projects.
Adding a submodule to a project
git submodule init
git submodule add [url] (you can use git submodule add [url] [foldername] to specify the folder)
Where [url] is the location of the git repository.
git add -A
git commit -m "Add submodules"
(then push if you want to share!)
This will add a reference to a specific commit to the project.
Downloading submodules in a repository that already has them setup
Clone the repository as normal
git clone [url]
Then init/update the submodules
git submodule init
git submodule update
If you haven't already cloned you can do
git clone --recurse-submodules [url]
updating the submodule
cd into the modules folder
cd mysubmodule
Then use normal git operations, i.e. pull
git pull
then cd back to the main repository and commit the update
git add -A
git commit -m "updated submodule"
Reset submodule to the commit stored in the parent
git submodule update
This will checkout the specific commit that is stored in the parent repository.
Changes in the submodule
When in the submodule folder you can make changes to the modules repository using normal git commands. Just make sure you push then add/commit in the parent repository so that everyone else gets the changes when they do submodule update.
Automatically updating submodules on git pull
You can get git to automatically update submodules by adding the following setting to git's config (add --global to set it for every repository, or run it as shown to set it just for the current one)
git config submodule.recurse true
Tuesday, 3 July 2018
Javascript packages in JSCore on iOS
Unfortunately most JavaScript packages are designed either to work in a browser or in Node.js using the CommonJS module loader. When working in JavaScriptCore you aren't really using either of these.
Loading the Scripts
Download the scripts from NPM or via a CDN/Github. Ideally you want it in a single file as this is going to be much easier for you to load.
Browser packages
A browser package will normally check whether it is running in a browser by seeing if the window object exists, and will often add its output to the window object. Duplicating this can be pretty simple: just evaluate a script that says var window = {} before running the package's script.
CommonJS/Node packages
Node packages use the CommonJS module pattern; they often check whether they are running on a platform that supports this by checking for the existence of the module and module.exports objects. You should be able to replicate this by adding var module = {}; var exports = module.exports = {};
You will run into further problems trying to import multiple files that use the commonJS module system, as this system uses a function called require() to load packages from disk.
CommonJS in Javascript Core
In theory you could implement a version of common JS within JSCore by adding a require method that loads and caches the contents of separate files.
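In rough terms it could look something like the sketch below. nativeReadFile is an invented name for whatever synchronous file-loading function the host app exposes to the JavaScript context, and proper module resolution is ignored:

// Very small CommonJS-style loader for JavaScriptCore.
var moduleCache = {}

function require (path) {
  if (moduleCache[path]) {
    return moduleCache[path].exports
  }
  var module = { exports: {} }
  moduleCache[path] = module // cache before evaluating to cope with cycles

  var source = nativeReadFile(path) // hypothetical native bridge into the context
  // Wrap the file so it sees module, exports and require, like in Node.
  var load = new Function('module', 'exports', 'require', source)
  load(module, module.exports, require)

  return module.exports
}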