I’ve decided to code a simple CLI version of the 2048 game in Python.
Some details of the implementation:
* I’ve handled only the “up” move. For the others, I rotate the matrix representing the board by 90, 180 or 270 degrees, apply the “up” move, and rotate back.
* The core of the algorithm is to apply “gravity”, merge adjacent cells with the same number, and then apply “gravity” again, because my merge function can leave empty spaces.
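The two ideas above can be sketched roughly as follows (a minimal sketch, assuming a 4×4 board stored as a list of lists with 0 for an empty cell; the function names here are my own, not the original implementation’s):

```python
def gravity(board):
    # slide the non-zero cells of each column up to the top
    n = len(board)
    for c in range(n):
        col = [board[r][c] for r in range(n) if board[r][c] != 0]
        col += [0] * (n - len(col))
        for r in range(n):
            board[r][c] = col[r]

def merge(board):
    # merge vertically adjacent equal cells; this may leave gaps,
    # which is why gravity is applied again afterwards
    n = len(board)
    for c in range(n):
        for r in range(n - 1):
            if board[r][c] != 0 and board[r][c] == board[r + 1][c]:
                board[r][c] *= 2
                board[r + 1][c] = 0

def rotate_cw(board):
    # rotate the board 90 degrees clockwise
    return [list(row) for row in zip(*board[::-1])]

def move(board, direction):
    # rotate so the chosen direction becomes "up", apply
    # gravity / merge / gravity, then rotate back
    k = {'up': 0, 'left': 1, 'down': 2, 'right': 3}[direction]
    for _ in range(k):
        board = rotate_cw(board)
    gravity(board)
    merge(board)
    gravity(board)
    for _ in range((4 - k) % 4):
        board = rotate_cw(board)
    return board
```

For example, a row [2, 2, 4, 4] moved left becomes [4, 8, 0, 0]: gravity packs it, the merge step produces [4, 0, 8, 0], and the second gravity pass closes the gap.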
When adding files to an SVN repository, their types are checked against a list in ./subversion/conf
Files that don’t match raise an error, but only in the commit phase. My problem is that depending on the size of the code you’re committing, it may take a while until you see the error, in which case you need to revert the added file, add an entry to subversion/conf and then svn-add the file again.
I’d like to know, before svn-adding, which files don’t match, so I can avoid the revert step and also don’t need to wait until the commit phase.
I found no existing solution, so I developed a python script to do that for me. The parsing is hard-coded, so one may need to do some tweaks before using it.
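The core of such a script can be sketched like this (a rough sketch only: the `ALLOWED` set below is a hypothetical hard-coded whitelist standing in for whatever the real script parses out of subversion/conf):

```python
import os

# Hypothetical whitelist of extensions -- in practice this would be
# parsed out of the repository's subversion/conf file
ALLOWED = {'.c', '.h', '.py', '.txt'}

def unmatched_files(root):
    # return the paths under root whose extension is not whitelisted,
    # i.e. the files that would fail at commit time
    bad = []
    for dirpath, dirnames, filenames in os.walk(root):
        # skip SVN metadata directories
        dirnames[:] = [d for d in dirnames if d != '.svn']
        for name in filenames:
            if os.path.splitext(name)[1].lower() not in ALLOWED:
                bad.append(os.path.join(dirpath, name))
    return sorted(bad)
```

Running `unmatched_files` on the working copy before svn-adding lists the offenders up front, so no revert is needed.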
In the python code below, the number of worker processes is k = 4.
# directory with instances
pwd = '/dir/to/instances/'
# executable (w/ path)
prog = '/dir/to/executable/./prog'
import os
import subprocess
from multiprocessing import Pool

# list the instance files in pwd
p = subprocess.Popen(['ls', pwd], stdout=subprocess.PIPE)
out, err = p.communicate()
l = [x for x in out.split() if os.path.isfile(pwd + x)]

pool = Pool(processes=4)
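The pool can then run the executable on each instance in parallel with pool.map. A runnable sketch (here '/bin/echo' and the instance names stand in for the real prog and the files in l, and I assume the program takes the instance path as its only argument):

```python
import subprocess
from multiprocessing import Pool

def run_instance(args):
    # run the executable on a single instance file and return its
    # stdout; assumes the program takes the instance path as its
    # only command-line argument
    prog, path = args
    p = subprocess.Popen([prog, path], stdout=subprocess.PIPE)
    out, _ = p.communicate()
    return out

if __name__ == '__main__':
    # '/bin/echo' is a stand-in so the sketch is runnable; replace
    # it (and the fake instance names) with prog and the files in l
    jobs = [('/bin/echo', 'instance1'), ('/bin/echo', 'instance2')]
    pool = Pool(processes=4)  # k = 4 worker processes
    results = pool.map(run_instance, jobs)
    pool.close()
    pool.join()
```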
The syntax is:
assert <boolean>, ["error message"]
Error message is optional. Here’s an example:
a = 1
assert a == 1, "Nope, a is not 1"
assert a == 2, "Nope, a is not 2"
The result is:
Traceback (most recent call last):
File "assert.py", line 3, in <module>
assert a == 2, "Nope, a is not 2"
AssertionError: Nope, a is not 2
Python may be a useful tool to parse HTML files.
The first thing we need to do is access the file. For this, we can use python’s urllib library (Python 2; in Python 3 the function lives in urllib.request):
from urllib import urlopen
url = 'http://some.url'
content = urlopen(url).read()
The code above reads the source of the url into content.
The second part consists of selecting the desired part of the text. Suppose we want to extract the content of a table in the middle of the page. We can use python regular expressions.
import re
pattern = '<tr>.*?</tr>'
m = re.findall(pattern, content)
The code above returns in m a list of all occurrences of ‘pattern’. In this pattern, ‘.’ represents any character and ‘*’ means we are interested in 0 or more repetitions. The ‘?’ character makes the match minimal (non-greedy).
For example, if content was:
<tr>hello</tr> something <tr>world</tr>
The list would be
['<tr>hello</tr>', '<tr>world</tr>']
But if we didn’t include the ‘?’ character, the list would be
['<tr>hello</tr> something <tr>world</tr>']
I made similar code for a very specific task and I probably won’t use it again. I was advised not to parse HTML files using regular expressions. An alternative in python is to use an HTML parsing library, for example, Beautiful Soup.
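To illustrate the parser-based approach without any third-party dependency, the same extraction can be done with the standard library’s html.parser module (Python 3; the module is called HTMLParser in Python 2 — Beautiful Soup offers a much more convenient interface on top of the same idea):

```python
from html.parser import HTMLParser

class RowExtractor(HTMLParser):
    # collects the text inside every <tr> element
    def __init__(self):
        super().__init__()
        self.rows = []
        self.in_tr = False

    def handle_starttag(self, tag, attrs):
        if tag == 'tr':
            self.in_tr = True
            self.rows.append('')

    def handle_endtag(self, tag):
        if tag == 'tr':
            self.in_tr = False

    def handle_data(self, data):
        if self.in_tr:
            self.rows[-1] += data

parser = RowExtractor()
parser.feed('<tr>hello</tr> something <tr>world</tr>')
# parser.rows is now ['hello', 'world']
```

Unlike the regex version, this keeps working when the markup gets messier (attributes on the tags, nested elements, line breaks inside a row).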